Author

Hamid A. Jalab

Bio: Hamid A. Jalab is an academic researcher from Information Technology University. The author has contributed to research in topics: Fractional calculus & Image segmentation. The author has an h-index of 27 and has co-authored 155 publications receiving 2,260 citations. Previous affiliations of Hamid A. Jalab include University of Malaya & Sana'a University.


Papers
Posted Content
TL;DR: Three multimedia encryption schemes proposed in the literature are described, and a new comparative study of DES, 3DES, and AES across nine factors weighs the efficiency, flexibility, and security that remain a challenge for researchers.
Abstract: With the rapid development of multimedia technologies, more and more multimedia data are generated and transmitted in the medical field, and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information, and digital documents are easy to copy and distribute, so they are exposed to many threats. This raises a serious security and privacy issue: because of the significance, accuracy, and sensitivity of the information, which may include sensitive content that should not be accessed by, or can only be partially exposed to, general users, appropriate protection becomes necessary. Another problem with digital documents and video is that undetectable modifications can be made with very simple and widely available equipment, which puts digital material intended for evidential purposes under question. Cryptography is one of the techniques used to protect important information. In this paper, three multimedia encryption schemes proposed in the literature are described, and a new comparative study of DES, 3DES, and AES across nine factors addresses the efficiency, flexibility, and security that remain a challenge for researchers.
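The paper compares the three ciphers at the design level and does not prescribe code; as a minimal sketch of the algorithms under comparison, the snippet below encrypts one message with each of DES, 3DES, and AES. The pycryptodome library, CBC mode, and the key sizes are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not from the paper): encrypt one message with DES, 3DES,
# and AES in CBC mode using pycryptodome. Key sizes follow each cipher.
from Crypto.Cipher import AES, DES, DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

plaintext = b"Sensitive multimedia payload"

# DES: 8-byte key, 8-byte block (broken by modern standards; shown for comparison).
des = DES.new(get_random_bytes(8), DES.MODE_CBC)
des_ct = des.encrypt(pad(plaintext, DES.block_size))

# 3DES: 24-byte key with adjusted parity bits, same 8-byte block as DES.
des3 = DES3.new(DES3.adjust_key_parity(get_random_bytes(24)), DES3.MODE_CBC)
des3_ct = des3.encrypt(pad(plaintext, DES3.block_size))

# AES: 16-byte block; a 32-byte key gives AES-256, the usual modern choice.
aes = AES.new(get_random_bytes(32), AES.MODE_CBC)
aes_ct = aes.encrypt(pad(plaintext, AES.block_size))

for name, ct in (("DES", des_ct), ("3DES", des3_ct), ("AES", aes_ct)):
    print(f"{name}: {len(ct)} ciphertext bytes")
```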

169 citations

Journal ArticleDOI
01 May 2020-Entropy
TL;DR: This study combines deep-learned features with handcrafted Q-deformed entropy features to discriminate between COVID-19, pneumonia, and healthy computed tomography (CT) lung scans.
Abstract: Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick, and accurate method to reduce the overload on radiologists diagnosing suspected cases. This study presents the combination of deep-learned features with handcrafted Q-deformed entropy features for discriminating between COVID-19, pneumonia, and healthy computed tomography (CT) lung scans. Pre-processing is used to reduce the effect of intensity variations between CT slices, and histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan then undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Combining all extracted features significantly improves the ability of the LSTM network to precisely discriminate between COVID-19, pneumonia, and healthy cases. The maximum accuracy achieved in classifying the collected dataset, comprising 321 patients, is 99.68%.
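The exact Q-deformed entropy definition and network configuration are not reproduced in this listing, so the sketch below is a hedged approximation of the pipeline's shape: Otsu histogram thresholding to isolate the background, a Tsallis-form entropy H_q = (1 - Σ p_i^q)/(q - 1) standing in for the paper's Q-deformed entropy, and an LSTM over per-slice feature vectors. All names, dimensions, and the skimage/PyTorch choices are assumptions.

```python
# Hedged sketch of the pipeline's shape (assumed implementation, not the paper's):
# Otsu thresholding isolates the background, a Tsallis-form entropy stands in for
# the Q-deformed entropy, and an LSTM classifies per-slice feature sequences.
import numpy as np
import torch
import torch.nn as nn
from skimage.filters import threshold_otsu

def deformed_entropy(ct_slice: np.ndarray, q: float = 1.5) -> float:
    """Tsallis-form entropy H_q = (1 - sum p_i^q) / (q - 1) of the lung foreground."""
    fg = ct_slice[ct_slice > threshold_otsu(ct_slice)]  # drop thresholded background
    hist, _ = np.histogram(fg, bins=256, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

class SliceSequenceLSTM(nn.Module):
    """LSTM over per-slice vectors of concatenated deep + handcrafted features."""
    def __init__(self, feat_dim: int, n_classes: int = 3):  # COVID-19/pneumonia/healthy
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, n_slices, feat_dim)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # classify from the last hidden state

print(deformed_entropy(np.random.rand(64, 64)))       # entropy of a toy slice
feats = torch.randn(4, 20, 129)                       # 4 scans, 20 slices, 128 CNN + 1 entropy
print(SliceSequenceLSTM(feat_dim=129)(feats).shape)   # torch.Size([4, 3])
```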

110 citations

Proceedings ArticleDOI
15 Nov 2011
TL;DR: A content-based image retrieval (CBIR) system using image features extracted by a color layout descriptor and a Gabor texture descriptor showed improved performance compared with other CBIR methods.
Abstract: The current paper presents a content-based image retrieval (CBIR) system using the image features extracted by a color layout descriptor (CLD) and a Gabor texture descriptor. CLD represents the spatial distribution of colors with a few nonlinearly quantized DCT coefficients of grid-based average colors, whereas the Gabor filter works as a bandpass filter for the local spatial frequency distribution. These two descriptors are very powerful for CBIR systems, and combining color and texture features leads to more accurate image retrieval. To compare the performance of image retrieval methods, average precision and recall are computed for all queries. The results showed improved performance (higher precision and recall values) compared with other CBIR methods.
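Neither descriptor's exact parameters are given here, so the following is a rough sketch of the two features named in the abstract: a CLD-like descriptor built from the 2-D DCT of grid-averaged colors (keeping only a few low-frequency coefficients; the MPEG-7 standard additionally uses a YCbCr transform and a zigzag scan), and Gabor filter-bank response statistics as the texture feature. The scipy/scikit-image calls and all parameter values are assumptions.

```python
# Rough sketch of the two descriptors (assumed parameters, not the authors' code).
import numpy as np
from scipy.fftpack import dct
from skimage.filters import gabor
from skimage.transform import resize

def color_layout_descriptor(rgb: np.ndarray, grid: int = 8, keep: int = 6) -> np.ndarray:
    """CLD-like feature: 2-D DCT of grid-averaged colors, a few coefficients kept.
    (MPEG-7 CLD also converts to YCbCr and zigzag-scans the 8x8 block.)"""
    feats = []
    for c in range(3):
        tiny = resize(rgb[..., c], (grid, grid), anti_aliasing=True)  # grid averages
        coef = dct(dct(tiny, axis=0, norm="ortho"), axis=1, norm="ortho")
        feats.append(coef.ravel()[:keep])       # low-frequency coefficients first
    return np.concatenate(feats)

def gabor_texture(gray: np.ndarray, freqs=(0.1, 0.2, 0.3), n_theta: int = 4) -> np.ndarray:
    """Mean/std of Gabor filter-bank magnitudes over frequencies and orientations."""
    feats = []
    for f in freqs:
        for k in range(n_theta):
            real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

img = np.random.rand(128, 128, 3)               # stand-in for a database image
feature = np.concatenate([color_layout_descriptor(img), gabor_texture(img.mean(axis=2))])
print(feature.shape)                            # combined color + texture vector
```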

81 citations

Journal ArticleDOI
TL;DR: The obtained results proved that combining the deep learning approach with the handcrafted features extracted by MGLCM raises the classification accuracy of the SVM classifier to 99.30%.
Abstract: Progress in the areas of artificial intelligence, machine learning, and medical imaging technologies has allowed the development of the medical image processing field, with some astonishing results over the last two decades. These innovations have enabled clinicians to view the human body in high-resolution or three-dimensional cross-sectional slices, which has increased the accuracy of diagnosis and allowed patients to be examined in a non-invasive manner. The fundamental step for magnetic resonance imaging (MRI) brain scan classifiers is their ability to extract meaningful features. As a result, many works have proposed different feature extraction methods for classifying abnormal growths in brain MRI scans. More recently, the application of deep learning algorithms to medical imaging has led to impressive performance in classifying and diagnosing complicated pathologies, such as brain tumors. In this paper, a deep learning feature extraction algorithm is proposed to extract the relevant features from MRI brain scans. In parallel, handcrafted features are extracted using the modified gray level co-occurrence matrix (MGLCM) method. Subsequently, the extracted relevant features are combined with the handcrafted features to improve the classification of MRI brain scans, with a support vector machine (SVM) used as the classifier. The obtained results proved that combining the deep learning approach with the handcrafted features extracted by MGLCM raises the classification accuracy of the SVM classifier to 99.30%.
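The modified GLCM (MGLCM) itself is not specified in this listing, so the sketch below substitutes the standard gray-level co-occurrence matrix from scikit-image and fuses its Haralick-style properties with a placeholder deep feature vector before training an SVM, mirroring the fusion the abstract describes. The deep features are random stand-ins here; in practice they would come from a pretrained CNN.

```python
# Sketch of the fusion described above, with the standard GLCM from scikit-image
# standing in for the paper's modified GLCM (MGLCM) and random vectors standing
# in for CNN features; the SVM is the classifier named in the abstract.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Haralick-style texture properties from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 4, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def fused_features(scan_u8: np.ndarray, deep_feats: np.ndarray) -> np.ndarray:
    """Concatenate handcrafted and deep features before classification."""
    return np.concatenate([glcm_features(scan_u8), deep_feats])

rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.integers(0, 256, (64, 64), dtype=np.uint8),
                             rng.standard_normal(128)) for _ in range(20)])
y = rng.integers(0, 2, 20)                      # toy normal/abnormal labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))                          # training accuracy on toy data
```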

76 citations


Cited by
Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: A reflection on i, the square root of minus one: an odd beast first met at school, an intruder hovering on the edge of reality, whose sense of the surreal only intensified with familiarity.
Abstract: There is, I think, something ethereal about i, the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time: an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
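The fourth category, a mail filter learned from a user's own accept/reject decisions, is easy to make concrete. Below is a minimal sketch with scikit-learn; the toy messages, labels, and the bag-of-words-plus-naive-Bayes choice are illustrative assumptions rather than anything from the paper.

```python
# Minimal sketch of the mail-filter example: learn a per-user filter from
# messages the user previously rejected (1) or kept (0). Toy data, scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",
    "cheap pills limited offer",
    "meeting moved to 3pm",
    "draft report attached for review",
]
rejected = [1, 1, 0, 0]                  # the user's past accept/reject decisions

# Bag-of-words + naive Bayes play the role of the automatically maintained rules.
mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, rejected)

print(mail_filter.predict(["free offer win now", "agenda for the 3pm meeting"]))
# Expected: [1 0] -- the first resembles mail this user rejects.
```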

13,246 citations

01 Jan 2002

9,314 citations

Book ChapterDOI
01 Jan 2015

3,828 citations

Journal ArticleDOI
01 Jul 1939-Nature
TL;DR: A review of S. Chandrasekhar's An Introduction to the Study of Stellar Structure, surveying the large body of work on stellar interiors produced since Eddington's 1926 book gave the classical account of the theory.
Abstract: EDDINGTON'S “Internal Constitution of the Stars” was published in 1926 and gives what now ranks as a classical account of his own researches and of the general state of the theory at that time. Since then, a tremendous amount of work has appeared. Much of it has to do with the construction of stellar models with different equations of state applying in different zones. Other parts deal with the effects of varying chemical composition, with pulsation and tidal and rotational distortion of stars, and with the precise relations between the interior and the atmosphere of a star. The striking feature of all this work is that so much can be done without assuming any particular mechanism of stellar energy-generation. Only such very comprehensive assumptions are made about the distribution and behaviour of the energy sources that we may expect future knowledge of their mechanism to lead mainly to more detailed results within the framework of the existing general theory. An Introduction to the Study of Stellar Structure By S. Chandrasekhar. (Astrophysical Monographs sponsored by The Astrophysical Journal.) Pp. ix+509. (Chicago: University of Chicago Press; London: Cambridge University Press, 1939.) 50s. net.

1,368 citations