Mohammed Hazim Alkawaz
Bio: Mohammed Hazim Alkawaz is an academic researcher from Management and Science University. The author has contributed to research in topics: Computer facial animation & Facial expression. The author has an h-index of 14 and has co-authored 71 publications receiving 676 citations. Previous affiliations of Mohammed Hazim Alkawaz include Universiti Teknologi Malaysia & University of Mosul.
TL;DR: A new hybrid method is proposed for image clustering that combines particle swarm optimization (PSO) with the k-means clustering algorithm; the resulting CBIR method uses color and texture as visual features to represent the images.
Abstract: In various application domains such as websites, education, crime prevention, commerce, and biomedicine, the volume of digital data is increasing rapidly. Difficulties arise when retrieving data from storage because some existing methods compare the query image against every image in the database; as a result, both the search space and the computational complexity grow. Content-based image retrieval (CBIR) methods aim to accurately retrieve images similar to a query image from large image databases, based on the similarity between image features. In this study, a new hybrid method is proposed for image clustering that combines particle swarm optimization (PSO) with the k-means clustering algorithm. It is presented as a CBIR method that uses color and texture as visual features to represent the images. The proposed method measures similarity using four extracted features: color histogram, color moments, co-occurrence matrices, and wavelet moments. The experimental results indicate that the proposed system has superior performance compared to the other systems in terms of accuracy.
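The hybrid clustering idea above can be sketched as follows: PSO searches for good cluster centroids (each particle encodes a full set of candidate centroids), and the best particle seeds an ordinary k-means refinement. This is only a minimal illustration, not the paper's implementation; all parameter values (particle count, inertia weight, acceleration coefficients) and the toy feature vectors are assumptions.

```python
import numpy as np

def kmeans(X, centroids, iters=20):
    """Standard Lloyd iterations starting from the given centroids."""
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels

def fitness(X, centroids):
    """Sum of distances to the nearest centroid (lower is better)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso_kmeans(X, k, n_particles=10, pso_iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    # Each particle encodes k candidate centroids, initialised from data points.
    pos = X[rng.integers(0, n, size=(n_particles, k))].astype(float)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(X, p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(pso_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(X, p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    # Refine the best PSO centroids with ordinary k-means.
    return kmeans(X, gbest.copy())

# Toy feature vectors standing in for image color/texture descriptors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(3, 0.3, (50, 4))])
centroids, labels = pso_kmeans(X, k=2)
```

The PSO stage compensates for k-means' sensitivity to initialization; in a CBIR setting the rows of `X` would be the color-histogram, color-moment, co-occurrence and wavelet-moment features extracted from each database image.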
TL;DR: The detection accuracy achieved with different overlapping block sizes is influenced by the size of the forged area, the distance between the two forged regions, and the threshold values used in the research.
Abstract: Since powerful editing software is easily accessible, images can be manipulated conveniently without leaving any noticeable evidence. It is therefore challenging to authenticate the genuineness of an image, as the naked eye cannot distinguish a tampered image from the original. Among the most common tampering methods is copy-move forgery, in which a region is copied and pasted within the same image. The Discrete Cosine Transform (DCT) can detect tampered regions accurately; nevertheless, the size of the overlapping blocks influences performance in terms of precision (FP) and recall (FN). In this paper, the researchers implement copy-move image forgery detection using DCT coefficients. First, the RGB image is converted to grayscale using the standard conversion technique. The grayscale image is then divided into overlapping blocks of m × m pixels, with m = 4, 8. In every block, the 2D DCT coefficients are calculated and rearranged into a feature vector using zig-zag scanning. The feature vectors are then sorted lexicographically, and finally the duplicated blocks are located by their Euclidean distance. To gauge the performance of the copy-move detection technique with various block sizes with respect to accuracy and storage, the similarity threshold D_similar = 0.1 and the distance threshold N_d = 100 are applied to the 10 input images. The 4 × 4 overlapping blocks produced a high false-positive rate, which decreased the forgery-detection accuracy, whereas the 8 × 8 overlapping blocks detected forgeries more accurately in terms of precision and recall. In a nutshell, the detection accuracy for different overlapping block sizes is influenced by the size of the forged area, the distance between the two forged regions, and the threshold values used in the research.
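The pipeline above (overlapping blocks → 2D DCT features → lexicographic sort → Euclidean comparison of neighbours) can be sketched compactly. This is a hedged illustration, not the paper's code: it keeps only the low-frequency DCT coefficients in raster order instead of a full zig-zag scan, and the thresholds and synthetic image are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def detect_copy_move(img, b=8, d_sim=0.1, n_d=16):
    """Return pairs of block positions whose DCT features nearly match."""
    D = dct_matrix(b)
    feats, pos = [], []
    h, w = img.shape
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            block = img[i:i + b, j:j + b].astype(float)
            coeffs = D @ block @ D.T              # 2-D DCT of the block
            feats.append(coeffs[:4, :4].ravel())  # keep low frequencies only
            pos.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])             # lexicographic sort of features
    matches = []
    for a, c in zip(order[:-1], order[1:]):       # compare neighbours in sorted order
        if np.linalg.norm(feats[a] - feats[c]) < d_sim:
            spatial = np.hypot(pos[a][0] - pos[c][0], pos[a][1] - pos[c][1])
            if spatial > n_d:                     # ignore trivially adjacent blocks
                matches.append((pos[a], pos[c]))
    return matches

# Synthetic grayscale image with a duplicated 8x8 patch (a copy-move forgery).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (48, 48))
img[30:38, 30:38] = img[5:13, 5:13]
pairs = detect_copy_move(img)
```

Sorting makes duplicated feature vectors adjacent, so only neighbouring rows need comparing; the spatial-distance check discards overlapping blocks that match merely because they share pixels.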
TL;DR: Retina Image Analysis may help optometrists gain a better understanding of a patient's retina through vessel detection.
Abstract: With advances in digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmology care and treatment. In this research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. The research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are offered various options such as saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA supports the selection of vessel segments, processes these vessels by calculating their diameter, standard deviation, and length, and displays the detected vessels on the retina. The Agile Unified Process is adopted as the development methodology, and the Retina Image Analysis tool is implemented in MATLAB (R2011b). To conclude, Retina Image Analysis may help optometrists gain a better understanding of a patient's retina, and promising results are attained that are comparable to the state of the art.
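The vessel measurements mentioned above (diameter, standard deviation, length) can be illustrated with a toy computation. The actual tool is a MATLAB GUI whose algorithms are not detailed here; this is only a numpy sketch on a synthetic binary vessel mask, assuming a roughly horizontal vessel so width can be read off column by column.

```python
import numpy as np

def vessel_metrics(mask):
    """Per-column width statistics for a roughly horizontal vessel segment."""
    widths = mask.sum(axis=0).astype(float)   # vessel thickness in each column
    widths = widths[widths > 0]               # keep only columns the vessel crosses
    return {
        "length_px": int(widths.size),        # columns spanned by the vessel
        "mean_diameter_px": float(widths.mean()),
        "diameter_std_px": float(widths.std()),
    }

# Synthetic mask: a vessel 5 pixels thick spanning columns 10..39.
mask = np.zeros((32, 64), dtype=bool)
mask[14:19, 10:40] = True
m = vessel_metrics(mask)
```

Real vessels are curved, so a practical tool would measure width perpendicular to the vessel centerline; the column-wise version above only conveys the idea of the reported statistics.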
TL;DR: A new method is proposed to detect and segment nuclei and determine whether they are malignant; it reveals high performance and accuracy in comparison to the techniques reported in the literature.
Abstract: Segmenting objects from a noisy and complex image remains a challenging task. This article proposes a new method to detect and segment nuclei and determine whether they are malignant. The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image so that it meets the segmentation requirements, by determining the region of interest, removing noise, and enhancing the image. Seed detection applies candidate detection on the centroid transform to evaluate the centroid of each object, yielding the seed points used in the segmentation stage, in which the nuclei are segmented using the level set (LS) method. In this research work, 58 H&E-stained breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method reveals high performance and accuracy in comparison to the techniques reported in the literature, and the experimental results are also consistent with the ground-truth images.
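The seed-detection stage can be illustrated in miniature: label the connected foreground components of a preprocessed binary image, then take each component's centroid as the seed for the subsequent level-set evolution (the level-set step itself is omitted). This is a simplified sketch under those assumptions, not the paper's centroid transform.

```python
import numpy as np
from collections import deque

def seed_points(binary):
    """Centroids of 4-connected foreground components, used as LS seeds."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    seeds, current = [], 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                current += 1
                q = deque([(y, x)])
                labels[y, x] = current
                ys, xs = [], []
                while q:  # BFS flood fill over the component
                    cy, cx = q.popleft()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                seeds.append((sum(ys) / len(ys), sum(xs) / len(xs)))  # centroid
    return seeds

# Two synthetic "nuclei" blobs in a binary (already preprocessed) image.
img = np.zeros((20, 20), dtype=bool)
img[2:6, 2:6] = True      # centroid (3.5, 3.5)
img[12:17, 10:15] = True  # centroid (14.0, 12.0)
seeds = seed_points(img)
```

Starting the level set from one seed per nucleus is what lets touching or clustered nuclei be separated instead of merging into a single region.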
TL;DR: The test revealed that the proposed method is more robust than both least significant bit embedding and the original EMD, with a peak signal-to-noise ratio of 55.92 dB and payload of 52,428 bytes.
Abstract: The rapid growth of covert activities via communications networks has brought an increasing need for efficient data-hiding methods to protect secret information from malicious attacks. One option is to combine two approaches, namely steganography and compression. However, performance heavily relies on three major factors, payload, imperceptibility, and robustness, which are always in a trade-off. Thus, this study aims to hide a large amount of secret message inside a grayscale host image without sacrificing its quality and robustness. To realize this goal, a new two-tier data-hiding technique is proposed that integrates an improved exploiting modification direction (EMD) method and Huffman coding. First, a secret message of arbitrary plain-text characters is compressed and transformed into streams of bits; each character is compressed into a maximum of 5 bits per stream. The stream is then divided into two parts of 3 and 2 bits. Subsequently, each part is transformed into its decimal value, which serves as a secret code. Second, a cover image is partitioned into groups of 5 pixels based on the original EMD method. Then, an enhancement is introduced by dividing each group into two parts, namely k_1 and k_2, consisting of 3 and 2 pixels, respectively. Furthermore, several groups are randomly selected for embedding to increase security. For each selected group, each part is embedded with its corresponding secret code by modifying at most one grayscale value, hiding the code in a (2k_i + 1)-ary notational system. The process is repeated until a stego-image is produced. Finally, the χ² (chi-square) test, considered one of the most severe attacks, is applied against the stego-image to evaluate the robustness of the proposed method. The test revealed that the proposed method is more robust than both least significant bit (LSB) embedding and the original EMD. Additionally, in terms of imperceptibility and capacity, the experimental results show that the proposed method outperformed both well-known methods, namely the original EMD and the optimized EMD, with a peak signal-to-noise ratio of 55.92 dB and a payload of 52,428 bytes.
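The EMD primitive at the core of this scheme is compact enough to sketch: a group of n pixels carries one digit in base (2n + 1), and any digit can be written by changing at most one pixel by ±1. The example below embeds a 5-bit code split 3/2 into a 5-pixel group split into k1 = 3 and k2 = 2 pixels, as the abstract describes; the pixel values and the code are invented for illustration, and the Huffman compression, random group selection, and chi-square evaluation are omitted.

```python
def emd_extract(group):
    """EMD extraction function: weighted sum of pixels mod (2n + 1)."""
    n = len(group)
    return int(sum((i + 1) * p for i, p in enumerate(group)) % (2 * n + 1))

def emd_embed(group, digit):
    """Return a copy of `group` whose extraction function equals `digit`,
    modifying at most one pixel value by +/- 1."""
    n = len(group)
    base = 2 * n + 1
    out = list(group)
    s = (digit - emd_extract(group)) % base
    if s == 0:
        return out                 # already carries the digit
    if s <= n:
        out[s - 1] += 1            # increase pixel s by one
    else:
        out[base - s - 1] -= 1     # decrease pixel (base - s) by one
    return out

# One 5-bit code, split 3 bits / 2 bits, embedded into a 5-pixel group
# split into k1 = 3 and k2 = 2 pixels.
pixels = [120, 122, 119, 200, 203]
code = 0b10110                        # high 3 bits = 5 (fits base 7), low 2 bits = 2 (fits base 5)
g1 = emd_embed(pixels[:3], code >> 2)     # 7-ary digit, since 2*3 + 1 = 7
g2 = emd_embed(pixels[3:], code & 0b11)   # 5-ary digit, since 2*2 + 1 = 5
stego = g1 + g2
```

Extraction is just `emd_extract` on each part of the group, so the receiver needs no side information beyond the grouping; note that a 3-bit value can reach 7 while base-7 digits only cover 0..6, which is presumably where the compression step's code design matters.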
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
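The personalised mail-filter example can be made concrete with a tiny learner. The sketch below uses a Naive Bayes classifier (one common choice for this task, not one the abstract prescribes) trained on messages a hypothetical user has kept or rejected; the messages and word features are invented for illustration.

```python
import math
from collections import Counter

def train(messages):
    """Count word occurrences per label from (text, label) pairs."""
    counts = {"keep": Counter(), "reject": Counter()}
    totals = Counter()
    for text, label in messages:
        totals[label] += 1
        counts[label].update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximising log prior + log likelihood."""
    words = text.lower().split()
    vocab = set(counts["keep"]) | set(counts["reject"])
    best, best_score = None, -math.inf
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            # Add-one (Laplace) smoothing avoids zero probabilities.
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training data standing in for the user's keep/reject decisions.
mail = [
    ("cheap pills buy now", "reject"),
    ("win a free prize now", "reject"),
    ("meeting agenda for monday", "keep"),
    ("project status report attached", "keep"),
]
counts, totals = train(mail)
label = classify("free pills now", counts, totals)
```

Retraining on each new keep/reject decision is what "maintains the filtering rules automatically": the user never writes a rule, the counts simply shift.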
01 Jan 1999
TL;DR: Several mechanisms for marking documents and several other mechanisms for decoding the marks after documents have been subjected to common types of distortion are described and compared.
Abstract: Each copy of a text document can be made different in a nearly invisible way by repositioning or modifying the appearance of different elements of text, i.e., lines, words, or characters. A unique copy can be registered with its recipient, so that subsequent unauthorized copies that are retrieved can be traced back to the original owner. In this paper we describe and compare several mechanisms for marking documents and several other mechanisms for decoding the marks after documents have been subjected to common types of distortion. The marks are intended to protect documents of limited value that are owned by individuals who would rather possess a legal than an illegal copy if they can be distinguished. We will describe attacks that remove the marks and countermeasures to those attacks. An architecture is described for distributing a large number of copies without burdening the publisher with creating and transmitting the unique documents. The architecture also allows the publisher to determine the identity of a recipient who has illegally redistributed the document, without compromising the privacy of individuals who are not operating illegally. Two experimental systems are described. One was used to distribute an issue of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, and the second was used to mark copies of company private memoranda.
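One of the marking mechanisms described, line shifting, can be sketched numerically: a marked line is moved slightly up or down relative to its unshifted neighbours, and decoding compares the gaps above and below it. The baseline positions, gap, and shift size below are illustrative stand-ins for measurements from a scanned page, not the paper's parameters.

```python
BASELINE_GAP = 12   # nominal distance between consecutive baselines (assumed)
SHIFT = 1           # vertical shift used to encode one bit (assumed)

def mark(num_lines, bits):
    """Shift every second line down (bit 1) or up (bit 0)."""
    baselines = [i * BASELINE_GAP for i in range(num_lines)]
    for k, bit in enumerate(bits):
        idx = 2 * k + 1                              # only odd lines carry bits;
        baselines[idx] += SHIFT if bit else -SHIFT   # even lines stay as references
    return baselines

def decode(baselines):
    """Recover bits by comparing each marked line's gap to its two neighbours."""
    bits = []
    for idx in range(1, len(baselines) - 1, 2):
        above = baselines[idx] - baselines[idx - 1]
        below = baselines[idx + 1] - baselines[idx]
        bits.append(1 if above > below else 0)
    return bits

bits = [1, 0, 1]
marked = mark(7, bits)
recovered = decode(marked)
```

Because decoding uses the *ratio* of adjacent gaps rather than absolute positions, the mark survives the uniform scaling and translation introduced by photocopying and scanning, which is why line shifting is among the more distortion-resistant mechanisms compared in the paper.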
TL;DR: This is a foundational study that formalises and categorises the existing usage of AR and VR in the construction industry and provides a roadmap to guide future research efforts.
Abstract: This paper presents a study on the usage landscape of augmented reality (AR) and virtual reality (VR) in the architecture, engineering and construction sectors, and proposes a research agenda to address the existing gaps in required capabilities. A series of exploratory workshops and questionnaires were conducted with the participation of 54 experts from 36 organisations from industry and academia. Based on the data collected from the workshops, six AR and VR use-cases were defined: stakeholder engagement, design support, design review, construction support, operations and management support, and training. Three main research categories for a future research agenda have been proposed: (i) engineering-grade devices, encompassing research that enables robust devices that can be used in practice, e.g. in the rough and complex conditions of construction sites; (ii) workflow and data management, to effectively manage the data and processes required by AR and VR technologies; and (iii) new capabilities, covering new research that will add features necessary for the specific demands of the construction industry. This study provides essential information for practitioners to inform adoption decisions, and provides researchers with a road map to inform their future research efforts. This is a foundational study that formalises and categorises the existing usage of AR and VR in the construction industry and provides a roadmap to guide future research efforts.
TL;DR: Robust segmentation and deep learning techniques with a convolutional neural network are used to train a model on bone marrow images to achieve accurate classification; experimental results reveal that the proposed method achieved 97.78% accuracy.
Abstract: Acute leukemia is a life-threatening disease, common in both children and adults, that can lead to death if left untreated. Acute Lymphoblastic Leukemia (ALL) progresses rapidly in children and can take a life within a few weeks. To diagnose ALL, hematologists perform blood and bone marrow examinations. Manual blood-testing techniques, which have been in use for a long time, are often slow and yield less accurate diagnoses. This work improves the diagnosis of ALL with a computer-aided system that yields accurate results by using image processing and deep learning techniques. This research proposes a method for classifying ALL into its subtypes and reactive bone marrow (normal) in stained bone marrow images. Robust segmentation and deep learning techniques with a convolutional neural network are used to train the model on bone marrow images and achieve accurate classification results. The experimental results were compared with those of other classifiers, namely Naive Bayes, KNN, and SVM, and reveal that the proposed method achieved 97.78% accuracy. The obtained results show that the proposed approach could be used as a tool to diagnose Acute Lymphoblastic Leukemia and its subtypes, assisting pathologists.
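The convolutional building block behind such a classifier can be illustrated without a deep learning framework. The paper does not specify its network architecture, so the sketch below only shows the conv → ReLU → max-pool primitive in plain numpy, applied to a synthetic image; the kernel and input are invented for the example.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (trailing rows/cols trimmed)."""
    h, w = x.shape
    x = x[:h - h % s, :w - w % s]
    return x.reshape(h // s, s, w // s, s).max(axis=(1, 3))

# A vertical-edge kernel applied to a synthetic cell-like image:
# the feature map responds where intensity jumps from dark to bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # bright region on the right half
k = np.array([[-1.0, 1.0]])      # responds to left-to-right transitions
feat = max_pool(relu(conv2d(img, k)))
```

In a trained CNN, many such kernels are learned from the labelled bone marrow images rather than hand-designed, and the stacked feature maps feed a classifier head that outputs the ALL subtype.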