scispace - formally typeset
Author

Mohamed Ghoneim

Bio: Mohamed Ghoneim is an academic researcher from Umm al-Qura University. The author has contributed to research in topics: Computer science & Encryption. The author has an h-index of 5 and has co-authored 12 publications receiving 60 citations. Previous affiliations of Mohamed Ghoneim include Mansoura University and Damietta University.

Papers
Journal ArticleDOI
TL;DR: In this article, the improved tan(ϕ(ξ)/2)-expansion, first integral, and (G′/G²)-expansion methods are used to extract a novel class of optical solitons in the quadratic-cubic nonlinear medium.
Abstract: In this work, the nonlinear Schrödinger equation is studied for birefringent fibers incorporating four-wave mixing. The improved tan(ϕ(ξ)/2)-expansion, first integral, and (G′/G²)-expansion methods are used to extract a novel class of optical solitons in the quadratic-cubic nonlinear medium. The extracted solutions are dark, periodic, singular, and dark-singular, along with other soliton solutions. These solutions are listed with their respective existence criteria. The recommended computational methods are uncomplicated, direct, and consistent, and they minimize the computational workload, which gives them a wide range of applicability. A detailed comparison with existing results is also presented.
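For orientation, the improved tan(ϕ(ξ)/2)-expansion method is commonly written as a finite series in tan(ϕ(ξ)/2), where ϕ solves an auxiliary first-order ODE. The sketch below shows the standard form of that ansatz in the literature; the specific parameters used in this paper are not reproduced here.

```latex
u(\xi) = A_0 + \sum_{k=1}^{m}\left[ A_k \left(p + \tan\frac{\varphi(\xi)}{2}\right)^{k}
       + B_k \left(p + \tan\frac{\varphi(\xi)}{2}\right)^{-k}\right],
\qquad
\varphi'(\xi) = a\sin\varphi(\xi) + b\cos\varphi(\xi) + c .
```

Substituting the ansatz into the governing equation and balancing the highest-order terms fixes the integer m; collecting powers of tan(ϕ/2) then yields an algebraic system for the coefficients A_k, B_k.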

20 citations

Journal ArticleDOI
TL;DR: A semantic model (CKRMCC) based on cognitive aspects is proposed that enables a cognitive computer to process knowledge as the human mind does and to find a suitable representation of that knowledge.
Abstract: Accumulating data are easy to store, but the ability to understand and use them does not keep pace with their growth, so research focuses on the nature of knowledge processing in the mind. This paper proposes a semantic model (CKRMCC) based on cognitive aspects that enables a cognitive computer to process knowledge as the human mind does and to find a suitable representation of that knowledge. In a cognitive computer, knowledge processing passes through three major stages: knowledge acquisition and encoding, knowledge representation, and knowledge inference and validation. The core of CKRMCC is knowledge representation, which in turn proceeds through four phases: prototype formation, discrimination, generalization, and algorithm development. Each of these phases is mathematically formulated using the notions of real-time process algebra. The performance efficiency of CKRMCC is evaluated on datasets from the well-known UCI machine learning repository. The acquired datasets are divided into training and testing data that are encoded using a concept matrix. In the knowledge representation stage, a set of symbolic rules is derived to establish a suitable representation of the training datasets; this representation remains available in a usable form whenever it is needed. The inference stage uses the rule set to obtain the classes of the encoded testing datasets. Finally, the knowledge validation phase validates and verifies the results of applying the rule set to the testing datasets. The performance is compared with classification and regression trees and support vector machines, showing that CKRMCC represents knowledge efficiently using symbolic rules.

14 citations

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This scheme applies the SNOW 2 stream cipher to JPEG 2000 codestreams in a way that preserves most of the inherent flexibility, scalability, and transcodability of encrypted JPEG 2000 images and also preserves end-to-end security.
Abstract: In this paper we propose a progressive encryption and controlled access scheme for JPEG 2000 encoded images. Our scheme applies the SNOW 2 stream cipher to JPEG 2000 codestreams in a way that preserves most of the inherent flexibility of JPEG 2000 encoded images and enables untrusted intermediate network transcoders to downstream an encrypted JPEG 2000 image without access to decryption keys. Our scheme can also control access to various image resolutions or quality layers by granting users different levels of access, using different decryption keys. Our scheme preserves most of the inherent flexibility, scalability, and transcodability of encrypted JPEG 2000 images and also preserves end-to-end security.
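The key idea, encrypting each quality layer under its own key so that an untrusted transcoder can drop layers and a user can decrypt only the layers they hold keys for, can be sketched as follows. This is a minimal illustration, not the paper's scheme: a SHA-256 counter-mode keystream stands in for the SNOW 2 cipher (which is not available in the Python standard library), and the layer/key names are hypothetical.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode keystream from SHA-256 -- a stand-in for the SNOW 2
    # stream cipher used in the paper.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_layers(layers, layer_keys, nonce=b"demo"):
    # Each quality layer is encrypted independently, so an intermediary
    # can truncate trailing layers without holding any decryption key.
    return [bytes(a ^ b for a, b in
                  zip(data, keystream(k, nonce + bytes([i]), len(data))))
            for i, (data, k) in enumerate(zip(layers, layer_keys))]

# Hypothetical codestream split into a base layer plus two enhancement layers.
layers = [b"base-resolution", b"enhancement-1", b"enhancement-2"]
keys = [b"k0", b"k1", b"k2"]
ct = encrypt_layers(layers, keys)

# A user granted only keys k0 and k1 recovers just the lower-quality layers;
# stream-cipher decryption is the same XOR with the same keystream.
pt01 = encrypt_layers(ct[:2], keys[:2])
```

Because the XOR keystream touches each layer independently, truncating the ciphertext list is exactly the transcoding operation the abstract describes: no key material is needed to reduce quality.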

11 citations

Journal ArticleDOI
TL;DR: In this paper, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine-tune the deep CNN layers for classifying monkeypox disease from images.
Abstract: Human skin diseases have become increasingly prevalent in recent decades, with millions of individuals in developed countries experiencing monkeypox. Such conditions often carry less obvious but no less devastating risks, including increased vulnerability to monkeypox, cancer, and low self-esteem. Due to the low visual resolution of monkeypox disease images, medical specialists with high-level tools are typically required for a proper diagnosis. The manual diagnosis of monkeypox disease is subjective, time-consuming, and labor-intensive. Therefore, it is necessary to create a computer-aided approach for the automated diagnosis of monkeypox disease. Most research articles on monkeypox disease relied on convolutional neural networks (CNNs) with classical loss functions to pick up discriminative elements in monkeypox images. To enhance this, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine-tune the deep CNN layers for classifying monkeypox disease from images. As a first step in the proposed approach, we use deep CNN-based models to learn the embedding of input images in Euclidean space. In the second step, we use an optimized classification model based on the triplet loss function to calculate the distance between pairs of images in Euclidean space and learn features that may be used to distinguish between different cases, including monkeypox cases. The proposed approach uses images of human skin diseases obtained from an African hospital. The experimental results of the study demonstrate the proposed framework's efficacy, as it outperforms numerous examples of prior research on skin disease problems. In addition, statistical experiments with Wilcoxon and analysis of variance (ANOVA) tests are conducted to evaluate the proposed approach in terms of effectiveness and stability. The recorded results confirm the superiority of the proposed method when compared with other optimization algorithms and machine learning models.
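The triplet loss mentioned in the second step has a standard form: it penalizes an embedding whenever an anchor image lies closer to a different-class example (negative) than to a same-class example (positive), up to a margin. A minimal sketch of that computation, with made-up embedding vectors and a hypothetical margin of 0.2:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Zero when the positive is already closer than the negative by at
    # least `margin`; positive otherwise, pushing training to fix it.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Well-separated triplet: the loss collapses to zero.
loss_good = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])
# Violating triplet: the negative is closer than the positive.
loss_bad = triplet_loss([0.0, 0.0], [1.0, 0.0], [0.1, 0.0])
```

In the paper's pipeline the embeddings come from the fine-tuned CNN rather than hand-written lists; the loss itself is unchanged.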

10 citations

Proceedings ArticleDOI
09 Jul 2006
TL;DR: DCT-domain watermarking schemes proved to be very resilient against many attacks except geometric attacks, especially rotation; these problems are tackled in this research by using dual detection, which uses an equivalent spatial-domain form of the watermark embedded in the DCT domain as the detection technique.
Abstract: In this paper, the discrete cosine transform domain (DCT domain) watermarking technique for copyright protection of still digital images is analyzed, and a new watermarking method with dual detection is proposed. The DCT is applied in blocks of 8 × 8 pixels, as in the JPEG algorithm. Previous DCT-domain watermarking schemes proved to be very resilient against many attacks except geometric attacks, especially rotation. These problems were tackled in our research by using dual detection, which uses an equivalent spatial-domain form of the watermark embedded in the DCT domain as the detection technique. To test the robustness of the algorithms, simple attacks such as filtering, JPEG compression, rotation, resizing, and cropping were applied.
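The basic mechanics of block-DCT watermarking, transform an 8×8 block, modify a mid-frequency coefficient, and invert, can be sketched in pure Python. This is a generic single-coefficient illustration, not the paper's dual-detection method; the coefficient position (4, 3) and the embedding strength are arbitrary choices for the demo.

```python
import math

N = 8  # JPEG-style block size

def alpha(u):
    # DCT-II normalization factor.
    return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

def dct2(block):
    # 2-D type-II DCT of an 8x8 block, as applied per block in JPEG.
    return [[alpha(u) * alpha(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coeff):
    # Inverse (type-III) DCT.
    return [[sum(alpha(u) * alpha(v) * coeff[u][v]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                 for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

def embed_bit(block, bit, strength=25.0, pos=(4, 3)):
    # Force the sign of one mid-frequency coefficient to carry the bit.
    c = dct2(block)
    c[pos[0]][pos[1]] = strength if bit else -strength
    return idct2(c)

def detect_bit(block, pos=(4, 3)):
    # Detection re-applies the DCT and reads the coefficient's sign.
    return dct2(block)[pos[0]][pos[1]] > 0.0

plain = [[128.0] * N for _ in range(N)]
marked = embed_bit(plain, True)
```

Mid-frequency coefficients are a common choice because low frequencies change visible brightness while high frequencies are discarded by JPEG quantization; rotation breaks this blockwise alignment, which is the weakness the paper's dual detection targets.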

9 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
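The mail-filtering example in the fourth category can be made concrete with a few lines of code: learn per-user word counts from messages the user kept or rejected, then score new mail against those counts. This is a deliberately naive sketch (a real filter would use probabilistic smoothing, e.g. naive Bayes); all messages and words below are invented.

```python
from collections import Counter

def train_filter(labeled_messages):
    # Learn the user's preferences by counting how often each word
    # appears in rejected vs. kept mail.
    spam_counts, ham_counts = Counter(), Counter()
    for text, rejected in labeled_messages:
        (spam_counts if rejected else ham_counts).update(text.lower().split())
    return spam_counts, ham_counts

def is_spam(text, spam_counts, ham_counts):
    # Filter a message when its words were seen more often in
    # rejected mail than in kept mail.
    words = text.lower().split()
    return sum(spam_counts[w] for w in words) > sum(ham_counts[w] for w in words)

# Hypothetical training data: (message, user_rejected_it).
model = train_filter([
    ("win a free prize now", True),
    ("free offer claim prize", True),
    ("meeting agenda for monday", False),
    ("lunch on monday", False),
])
```

Because the counts come from each user's own decisions, two users with different habits automatically get different filters, which is exactly the customization the paragraph argues cannot be hand-programmed per user.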

13,246 citations

Journal ArticleDOI
TL;DR: The Computational Brain provides a broad overview of neuroscience and computational theory, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research.

1,472 citations

Journal ArticleDOI
TL;DR: The book Foundations of Cognitive Science is presented as a new way to explore the knowledge of the field, to be read step by step.
Abstract: Spend your time, even if only a few minutes, to read a book. Reading a book will never reduce or waste your time. For some people, reading becomes a daily need, like eating. Now, what about you? Do you like to read? Here we present the book Foundations of Cognitive Science as a new way to explore the knowledge of the field; when reading it, you can take away something to remember from every reading session, step by step.

195 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: Extensive experimental results demonstrate that the proposed approach provides excellent annotation with an accuracy of 99.5% and can lead to a tighter connection between agriculture specialists and computer systems, yielding more effective and reliable results.
Abstract: The study described in this paper applies the Gabor wavelet transform to extract relevant features from tomato leaf images, in conjunction with Support Vector Machines (SVMs) with alternative kernel functions, in order to detect and identify the type of disease infecting a tomato plant. Initially, we collected real samples of diseased tomato leaves and isolated each leaf in a single image; a wavelet-based feature technique was then employed to identify an optimal feature subset. Finally, a support vector machine classifier with different kernel functions, including the Cauchy, Invmult, and Laplacian kernels, was employed to evaluate the ability of this approach to detect whether a tomato leaf is infected with powdery mildew or early blight. To evaluate the performance of the presented approach, we report tests on a dataset of 100 images for each type of tomato disease. Extensive experimental results demonstrate that the proposed approach provides excellent annotation with an accuracy of 99.5%. The efficient results obtained from the proposed approach can lead to a tighter connection between agriculture specialists and computer systems, yielding more effective and reliable results.
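Two of the alternative SVM kernels named in the abstract have compact standard definitions, which the sketch below implements directly (the bandwidth sigma and the sample vectors are arbitrary demo values; the paper's tuned parameters are not reproduced).

```python
import math

def cauchy_kernel(x, y, sigma=1.0):
    # Cauchy kernel: K(x, y) = 1 / (1 + ||x - y||^2 / sigma^2)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1.0 / (1.0 + d2 / sigma ** 2)

def laplacian_kernel(x, y, sigma=1.0):
    # Laplacian kernel: K(x, y) = exp(-||x - y|| / sigma)
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-d / sigma)

k_same = cauchy_kernel([1.0, 2.0], [1.0, 2.0])    # identical points give 1.0
k_far = laplacian_kernel([0.0, 0.0], [3.0, 4.0])  # distance 5 gives exp(-5)
```

Both are similarity measures that decay with distance; swapping them into an SVM only changes how feature-space distances between wavelet feature vectors are weighted, which is why the paper can compare kernels on the same features.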

90 citations

Journal ArticleDOI
TL;DR: This paper's aim exceeds merely finding the traveling wave solutions of the considered model; it also compares the accuracy of the schemes used by applying the quintic B-spline scheme and examining the convergence among the three methods.
Abstract: This paper investigates analytical solutions of the well-known higher-order nonlinear Schrödinger (NLS) equation through three members of the Kudryashov family of methods (the original Kudryashov method, the modified Kudryashov method, and the generalized Kudryashov method). The considered model is also known as the sub-10-fs-pulse propagation model, used to describe the implications of such measurements for creating even shorter pulses. We also discuss the problem of validating these and previous measurements of such short pulses. This paper's aim exceeds merely finding the traveling wave solutions of the considered model; it also compares the accuracy of the schemes used by applying the quintic B-spline scheme and examining the convergence among the three methods. Many distinct and novel solutions have been obtained and sketched, along with different techniques, to show more details of the model's dynamical behavior. Finally, the agreement between the analytical and numerical schemes is shown through tables and figures.
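As background on the family of methods named above: the original Kudryashov method is usually presented as a polynomial ansatz in an auxiliary function Q(ξ) that satisfies a Riccati-type equation. The sketch below shows that commonly cited form; the degree m and coefficients for this particular model are not reproduced here.

```latex
u(\xi) = \sum_{k=0}^{m} a_k \, Q(\xi)^{k},
\qquad
Q(\xi) = \frac{1}{1 + e^{\xi}},
\qquad
Q'(\xi) = Q(\xi)^{2} - Q(\xi).
```

Since Q′ is a polynomial in Q, substituting the ansatz into the traveling-wave ODE turns it into an algebraic system in the a_k, the same balancing-and-collecting pattern the modified and generalized variants extend with different auxiliary functions.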

41 citations