Author

Ahmed T. Sadiq

Other affiliations: Al Maaref University College
Bio: Ahmed T. Sadiq is an academic researcher at the University of Technology, Iraq. The author has contributed to research in the topics of computer science and the Bees algorithm, has an h-index of 8, and has co-authored 41 publications receiving 142 citations. Previous affiliations of Ahmed T. Sadiq include Al Maaref University College.


Papers
Book Chapter
15 Sep 2019
TL;DR: Supervised machine learning with two feature extraction techniques, Term Frequency and Term Frequency-Inverse Document Frequency, is used to classify political web log posts; the linear kernel was deemed the most suitable for the model.
Abstract: In recent years, the number of web logs and the amount of opinionated data on the World Wide Web have grown substantially. The ability to determine the political orientation of an article automatically can be beneficial in many areas, from academia to security. However, the sentiment classification of web log posts (political web log posts in particular) is apparently more complex than the sentiment classification of conventional text. In this paper, supervised machine learning with two feature extraction techniques, Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF), is used for the classification process. For the investigation, an SVM with four kernels has been employed. The test results reveal that the linear kernel achieved an accuracy of 91.935% with TF and 95.161% with TF-IDF; it was deemed the most suitable for our model.
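The TF and TF-IDF weighting that feeds the SVM can be sketched in pure Python. The toy corpus and the plain log(N/df) form of IDF are illustrative assumptions, since the abstract does not spell out the exact preprocessing:

```python
import math

# Toy corpus standing in for preprocessed web log posts (hypothetical data).
docs = [
    "election reform policy vote",
    "reform policy debate vote vote",
    "security policy border",
]

def tf(doc):
    """Term Frequency: count of each term divided by document length."""
    terms = doc.split()
    return {t: terms.count(t) / len(terms) for t in set(terms)}

def idf(corpus):
    """Inverse Document Frequency: log(N / df) for each term."""
    n = len(corpus)
    vocab = {t for d in corpus for t in d.split()}
    return {t: math.log(n / sum(1 for d in corpus if t in d.split()))
            for t in vocab}

def tf_idf(doc, idf_table):
    """Combine both weights for one document."""
    return {t: w * idf_table[t] for t, w in tf(doc).items()}

weights = tf_idf(docs[1], idf(docs))
print(round(weights["vote"], 3))  # frequent term, appears in 2 of 3 docs
print(weights["policy"])          # appears in every doc, so its IDF is zero
```

With these weights as feature vectors, each document becomes a point in vocabulary space for a linear SVM to separate; TF-IDF's advantage here is visible in how it zeroes out terms common to every document.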

16 citations

Proceedings Article
26 Nov 2012
TL;DR: The proposed system consists of three main steps: document representation, classifier construction and performance evaluation; it achieved good results of up to 96% when applied to a number of test documents for each sub-category of the main categories.
Abstract: Text categorization is the task in which documents are classified into one or more predefined categories based on their contents. The proposed system consists of three main steps: document representation, classifier construction and performance evaluation. In the first step, a set of pre-classified documents is provided. Input documents are initially pre-processed in order to be split into features, and non-informative features are eliminated. The remaining features are then weighted based on the frequency of each feature in the document and standardized by reducing each feature to its root using a stemming process. Because a large number of features remain even after non-informative feature removal and stemming, the proposed system applies specific thresholds to extract the distinct features that represent the input document. In the second step, the text categorization model (classifier) is built by learning the distinct features that represent all the pre-classified documents; this is achieved using a supervised classification technique known as rough set theory. The model uses a pair of precise concepts from that theory, the lower and upper approximations, to classify any test document into one or more of the main categories and sub-categories. In the final step, the performance of the proposed system is evaluated: it achieved good results of up to 96% when applied to a number of test documents for each sub-category of the main categories.
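The lower/upper approximation step can be illustrated with a minimal pure-Python sketch. The documents, features and "sports" category are hypothetical stand-ins; in the paper the indiscernibility relation would be computed over the extracted distinct features:

```python
# Documents described by their set of distinct features (hypothetical data);
# the indiscernibility relation groups documents with identical feature sets.
docs = {
    "d1": frozenset({"ball", "goal"}),
    "d2": frozenset({"ball", "goal"}),
    "d3": frozenset({"stock", "market"}),
    "d4": frozenset({"ball", "market"}),
}
sports = {"d1", "d4"}  # pre-classified members of the "sports" category

# Partition documents into indiscernibility classes.
classes = {}
for name, feats in docs.items():
    classes.setdefault(feats, set()).add(name)

# Lower approximation: classes entirely inside the category (certain members).
lower = set().union(*(c for c in classes.values() if c <= sports))
# Upper approximation: classes that overlap the category (possible members).
upper = set().union(*(c for c in classes.values() if c & sports))

print(sorted(lower))  # ['d4'] -- certainly sports
print(sorted(upper))  # ['d1', 'd2', 'd4'] -- possibly sports
```

Here d1 and d2 are indiscernible yet labelled differently, so their class falls only in the upper approximation; a test document matching those features would be assigned to the category only tentatively, which is exactly the rough-set boundary region.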

14 citations

Proceedings Article
26 Apr 2017
TL;DR: Two new approaches to robot path planning in dynamic environments, based on the D* algorithm and particle swarm optimization, are proposed; the Lbest PSO algorithm moves the robot from the start node through a dynamic environment containing a moving obstacle by finding the optimal path.
Abstract: This paper proposes two new approaches to robot path planning in dynamic environments based on the D* algorithm and particle swarm optimization. Generally speaking, the grid method is used to decompose the two-dimensional space and build a node class that contains the information of the space environment. The D* algorithm analyzes the environment from the goal node and computes the cost from each node to the start node. In the first approach, the Lbest PSO algorithm is used to move the robot from the start node through a dynamic environment, which contains a dynamic obstacle moving in free space, by finding and displaying the optimal path. In the second approach, a method is developed to handle the gate-raise state, in which the robot cannot pass a node, unlike the plain D* algorithm. Experimental results, simulated in different dynamic environments, show that in the second approach the robot reaches its target without colliding with obstacles and finds the optimal path with the minimum number of iterations, minimum total arc cost and minimum time.
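The Lbest (ring-topology) PSO update can be sketched on a one-dimensional stand-in cost. The quadratic cost and the coefficients (inertia 0.7, c1 = c2 = 1.5) are illustrative assumptions; in the paper the cost of a position would come from the D* node costs over the grid:

```python
import random

random.seed(0)  # deterministic run for the sketch

def cost(x):
    # Stand-in for the path cost a particle's position would incur.
    return (x - 3.0) ** 2

N, ITERS = 10, 100
pos = [random.uniform(-10.0, 10.0) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]  # each particle's personal best position

for _ in range(ITERS):
    for i in range(N):
        # lbest: best position among the particle and its two ring neighbours.
        ring = [pbest[(i - 1) % N], pbest[i], pbest[(i + 1) % N]]
        lbest = min(ring, key=cost)
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (lbest - pos[i]))
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]

best = min(pbest, key=cost)
print(round(best, 2))  # settles near the optimum x = 3
```

The ring neighbourhood is what makes this "Lbest" rather than "Gbest": information about good positions spreads gradually around the ring, which slows premature convergence in cluttered environments.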

14 citations

Book Chapter
15 Sep 2019
TL;DR: Naive Bayes is applied to analyse opinions by exploring categories from a text and classifying it into the right class (Reform, Conservative or Revolutionary), and the effect of two feature extraction methods, Term Frequency and Term Frequency-Inverse Document Frequency, on the accuracy of classifying Arabic articles is investigated.
Abstract: Sentiment analysis plays an important role in most human activities and has a significant impact on our behaviour. With the development and use of web technology, there is a huge amount of data that represents users' opinions in many areas such as politics and business. This paper applies Naive Bayes (NB) to analyse opinions by exploring categories from a text and classifying it into the right class (Reform, Conservative or Revolutionary). It investigates the effect of using two feature extraction methods, Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF), with Naive Bayes classifiers (Gaussian, Multinomial, Complement and Bernoulli) on the accuracy of classifying Arabic articles. Precision, recall, F1-score and the number of correct predictions have been used to evaluate the performance of the applied classifiers. The results reveal that using TF with TF-IDF improved the accuracy to 96.77%; the Complement classifier was deemed the most suitable for our model.
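A Multinomial Naive Bayes over TF counts can be sketched in pure Python. The toy English snippets are hypothetical stand-ins for the Arabic corpus, and the Complement variant the paper prefers differs only in estimating term statistics from the complement of each class:

```python
import math
from collections import Counter

# Hypothetical toy training data: (text, political orientation class).
train = [
    ("reform policy gradual change", "Reform"),
    ("gradual reform vote", "Reform"),
    ("revolution protest uprising", "Revolutionary"),
    ("uprising protest street", "Revolutionary"),
]

def fit(samples):
    """Collect class priors, per-class term counts and the vocabulary."""
    class_counts = Counter(label for _, label in samples)
    word_counts = {c: Counter() for c in class_counts}
    for text, label in samples:
        word_counts[label].update(text.split())
    vocab = {w for text, _ in samples for w in text.split()}
    return class_counts, word_counts, vocab

def predict(text, class_counts, word_counts, vocab):
    """Pick the class with the highest log posterior."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c in class_counts:
        lp = math.log(class_counts[c] / total)  # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[c][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = fit(train)
print(predict("gradual policy reform", *model))  # -> Reform
print(predict("street protest", *model))         # -> Revolutionary
```

Laplace (add-one) smoothing keeps unseen terms from zeroing out a class's probability, which matters for short web-log posts.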

13 citations

Posted Content
TL;DR: The proposed system achieved good results of up to 96% when applied to a number of test text documents for each sub-category of the main categories.
Abstract: Text categorization is the task in which text documents are classified into one or more predefined categories based on their contents. The proposed system consists of three main steps: text document representation, classifier construction and performance evaluation. In the first step, a set of pre-classified text documents is provided. Each text document is initially preprocessed in order to be split into features; these features are weighted based on the frequency of each feature in the text document, and non-informative features are eliminated. The remaining features are then standardized by reducing each feature to its root using a stemming process. Because a large number of features remain even after non-informative feature removal and stemming, the proposed system applies specific thresholds to extract the distinct features that represent the text document. In the second step, the text categorization model (classifier) is built by learning the distinct features that represent all the pre-classified text documents for each sub-category of the main categories; this is achieved using a supervised categorization technique known as rough set theory. Thereafter, the model uses a pair of precise concepts from that theory, the lower and upper approximations, to classify any test text document into one or more of the main categories and sub-categories. In the final step, the performance of the proposed system is evaluated: it achieved good results of up to 96% when applied to a number of test text documents for each sub-category of the main categories.

12 citations


Cited by
01 Jan 2016

1,180 citations

Journal Article
TL;DR: Text classification, the workings of different term weighting methods, and a comparison between different classification techniques are surveyed.
Abstract: Supervised machine learning studies have gained significance recently because of the increasing number of electronic documents available from different resources. Text classification can be defined as the task of automatically categorizing a group of documents into one or more predefined classes according to their subjects. The major objective of text classification is thus to enable users to extract information from textual resources, combining processes such as retrieval, classification and machine learning techniques in order to classify different patterns. In text classification, term weighting methods assign suitable weights to specific terms to enhance classification performance. This paper surveys text classification, the workings of different term weighting methods, and a comparison between different classification techniques.

215 citations

Proceedings Article
19 Mar 2015
TL;DR: A survey of ECG classification into arrhythmia types is presented, with a detailed review of preprocessing techniques, ECG databases, feature extraction techniques, ANN-based classifiers, and performance measures.
Abstract: Classification of electrocardiogram (ECG) signals plays an important role in the diagnosis of heart diseases. Accurate ECG classification is a challenging problem. This paper presents a survey of ECG classification into arrhythmia types. Early and accurate detection of arrhythmia types is important in detecting heart diseases and choosing appropriate treatment for a patient. Different classifiers are available for ECG classification. Amongst all classifiers, artificial neural networks (ANNs) have become very popular and the most widely used for ECG classification. This paper discusses the issues involved in ECG classification and presents a detailed survey of preprocessing techniques, ECG databases, feature extraction techniques, ANN-based classifiers, and performance measures to address those issues. Furthermore, for each surveyed paper, our paper also presents a detailed analysis of input beat selection and the output of the classifiers.
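The simplest ANN-style classifier such a survey covers, a single perceptron, can be sketched on toy beat features; the two features (RR interval, QRS width) and all numbers are illustrative assumptions, not real ECG data:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights by the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy linearly separable beats: (RR interval, QRS width) -> 0 normal, 1 arrhythmic.
data = [((0.8, 0.08), 0), ((0.9, 0.10), 0), ((0.4, 0.16), 1), ((0.3, 0.14), 1)]
w, b = train_perceptron(data)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(classify(0.85, 0.09), classify(0.35, 0.15))  # -> 0 1
```

Real arrhythmia classifiers stack many such units into multilayer networks and feed them richer extracted features, but the error-driven weight update is the same idea.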

203 citations

Journal Article
TL;DR: Rough Set Theory can help in clustering incomplete web data and thus aid decision making; this paper aims at understanding the theory and its applications in web mining.
Abstract: As in data mining, the three major web mining operations are clustering, association rule mining, and sequential analysis. Typical clustering operations in web mining involve finding natural groupings of web resources or web users. Researchers have found and pointed at some important and fundamental differences between clustering in conventional applications and clustering in web mining. Moreover, due to a variety of reasons inherent in web browsing and web logging, the likelihood of bad and incomplete data is higher. This is where Rough Set Theory can play a crucial role, and researchers have been utilizing it in clustering the incomplete data and thus aiding decision making. This paper aims at understanding Rough Set Theory and its applications in web mining.

57 citations

Journal Article
11 Mar 2021
TL;DR: A comparison of various methods to classify the dataset with a fusion-based feature extraction method is presented, together with a proposed framework whose preprocessing uses a filtering approach to remove noise from the raw data.
Abstract: The human heart is divided into four sections: the left and right atrium and the left and right ventricle. Monitoring and taking care of every human heart is essential; early prediction is therefore important to save lives and raise awareness about diet plans and lifestyle schedules, and it is also used to improve the clinical diagnosis and treatment of patients. To predict or identify cardiovascular problems, the electrocardiogram (ECG) is used to record the electrical signal of the heart from the body surface. An algorithm that learns from a pre-clustered dataset is called supervised; an algorithm that learns from the dataset without such labels is called unsupervised. Large numbers of heartbeats are then classified into normal, abnormal and irregular categories to detect cardiovascular diseases. This research article compares various methods to classify the dataset with a fusion-based feature extraction method. Our framework also performs preprocessing that consists of a denoising filter to reconstruct the raw data from the original input. The signal is affected by thermal noise, instrumentation noise and calibration noise due to power-line fluctuation; this interference is high in many handheld devices and can be eliminated by denoising filters. The output of the denoising filter is the input for fusion-based feature extraction and prediction-model construction. This workflow has given good results in classifier effectiveness under imbalanced conditions. We achieved a good accuracy of 96.5% and minimum computation time for classification of the ECG signal.
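A denoising filter of the kind the framework describes can be sketched as a simple moving average; the window size and toy samples are illustrative assumptions, since the abstract does not specify the filter design:

```python
def moving_average(signal, window=3):
    """Smooth each sample by averaging it with its neighbours,
    shrinking the window at the edges of the signal."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Toy alternating samples standing in for a noisy ECG trace.
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(moving_average(noisy))  # high-frequency jitter is flattened toward 0.5
```

An averaging window suppresses high-frequency thermal and instrumentation noise but also blurs sharp QRS peaks, which is why practical ECG pipelines prefer band-stop or wavelet filters for power-line interference.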

57 citations