Author

Ashraf M. Abdelbar

Bio: Ashraf M. Abdelbar is an academic researcher from Brandon University. The author has contributed to research in the topics of ant colony optimization algorithms and artificial neural networks. The author has an h-index of 16, has co-authored 81 publications, and has received 928 citations. Previous affiliations of Ashraf M. Abdelbar include the University of Manitoba and the American University in Cairo.


Papers
Journal ArticleDOI
TL;DR: This paper shows that approximating MAPs (maximum a posteriori explanations) within a constant ratio bound is also NP-hard. The result holds for networks with constrained in-degree and out-degree, extends to randomized approximation, and still applies if the ratio bound, instead of being constant, is allowed to be a polynomial function of various aspects of the network topology.
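For context, the kind of ratio-bound guarantee ruled out here is usually stated as follows (a sketch of the standard definition, with notation chosen for this summary rather than taken from the paper):

    \[
      \frac{P(\mathbf{x}^{*} \mid \mathbf{e})}{P(\hat{\mathbf{x}} \mid \mathbf{e})} \;\le\; \rho, \qquad \rho \ge 1,
    \]

where \(\mathbf{x}^{*}\) is an optimal MAP explanation of the evidence \(\mathbf{e}\), \(\hat{\mathbf{x}}\) is the explanation returned by the approximation algorithm, and \(\rho\) is the ratio bound. The result summarized above says that meeting this guarantee in polynomial time is NP-hard even for constant \(\rho\), and remains NP-hard when \(\rho\) is allowed to grow polynomially with properties of the network topology.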

118 citations

Proceedings ArticleDOI
27 Dec 2005
TL;DR: Fuzzy PSO, as discussed by the authors, is a generalization of standard PSO in which charisma is defined to be a fuzzy variable: more than one particle in each neighborhood can have a non-zero degree of charisma, and each such particle is allowed to influence others to a degree that depends on its charisma.
Abstract: In standard particle swarm optimization (PSO), the best particle in each neighborhood exerts its influence over other particles in the neighborhood. In this paper, we propose fuzzy PSO, a generalization which differs from standard PSO in the following respect: charisma is defined to be a fuzzy variable, and more than one particle in each neighborhood can have a non-zero degree of charisma, and, consequently, is allowed to influence others to a degree that depends on its charisma. We evaluate our model on the weighted maximum satisfiability (maxsat) problem, comparing performance to standard PSO and to Walk-Sat.
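To make the difference concrete, here is a minimal sketch of how the social component of the velocity update might be generalized: instead of a single neighborhood best, every neighbor with non-zero charisma pulls on the particle in proportion to its charisma. The function names, the normalization, and the charisma values are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def fuzzy_social_term(x, neighbors, charisma, c2, rng):
        # Standard PSO would use only the single best neighbor; here every
        # neighbor with non-zero charisma contributes in proportion to it.
        # neighbors: dict mapping particle id -> that particle's best position
        # charisma:  dict mapping particle id -> fuzzy charisma degree in [0, 1]
        total = np.zeros_like(x)
        weight_sum = sum(charisma.values()) or 1.0
        for pid, best_pos in neighbors.items():
            mu = charisma.get(pid, 0.0)
            r = rng.random(x.shape)
            total += (mu / weight_sum) * c2 * r * (best_pos - x)
        return total

    def velocity_update(v, x, pbest, neighbors, charisma,
                        w=0.7, c1=1.5, c2=1.5, rng=None):
        # Usual inertia + cognitive terms; only the social term changes.
        rng = rng or np.random.default_rng()
        cognitive = c1 * rng.random(x.shape) * (pbest - x)
        return w * v + cognitive + fuzzy_social_term(x, neighbors, charisma, c2, rng)

Setting a single neighbor's charisma to 1 and all others to 0 recovers the standard PSO social term, which is one way to read "generalization" in the abstract.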

56 citations

Journal ArticleDOI
TL;DR: Five extensions to Ant-Miner are proposed, including stubborn ants, an ACO variation in which an ant is allowed to take its own past history into consideration. The extensions improve the algorithm’s performance in terms of predictive accuracy and simplicity of the generated rule set.
Abstract: Ant-Miner is an ant-based algorithm for the discovery of classification rules. This paper proposes five extensions to Ant-Miner: (1) we utilize multiple types of pheromone, one for each permitted rule class, i.e. an ant first selects the rule class and then deposits the corresponding type of pheromone; (2) we use a quality contrast intensifier to magnify the reward of high-quality rules and to penalize low-quality rules in terms of pheromone update; (3) we allow the use of a logical negation operator in the antecedents of constructed rules; (4) we incorporate stubborn ants, an ACO variation in which an ant is allowed to take into consideration its own personal past history; (5) we use an ant colony behavior in which each ant is allowed to have its own values of the α and β parameters (in a sense, to have its own personality). Empirical results on 23 datasets show improvements in the algorithm’s performance in terms of predictive accuracy and simplicity of the generated rule set.
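As a rough illustration of extensions (1) and (5), term selection might look like the sketch below: pheromone is keyed by the pre-selected rule class, and each ant carries its own α and β. All names and data layouts here are hypothetical, not the authors' implementation.

    import random
    from dataclasses import dataclass

    @dataclass
    class Ant:
        alpha: float   # private alpha exponent: part of this ant's "personality" (extension 5)
        beta: float    # private beta exponent

    def choose_term(ant, terms, pheromone, heuristic, rule_class):
        # The ant has already committed to rule_class as the consequent,
        # so pheromone is looked up per rule class (extension 1).
        weights = [(pheromone[(rule_class, t)] ** ant.alpha) *
                   (heuristic[t] ** ant.beta) for t in terms]
        # Roulette-wheel selection proportional to the weights.
        r = random.uniform(0.0, sum(weights))
        acc = 0.0
        for t, w in zip(terms, weights):
            acc += w
            if acc >= r:
                return t
        return terms[-1]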

49 citations

Journal ArticleDOI
01 Jan 2013
TL;DR: This paper proposes several extensions to cAnt-Miner based on the use of multiple pheromone types, one for each class value to be predicted. The extended algorithm improves classification accuracy to a statistically significant extent and has classification accuracy similar to the well-known Ripper and PART rule induction algorithms.
Abstract: The cAnt-Miner algorithm is an Ant Colony Optimization (ACO) based technique for classification rule discovery in problem domains which include continuous attributes. In this paper, we propose several extensions to cAnt-Miner. The main extension is based on the use of multiple pheromone types, one for each class value to be predicted. In the proposed µcAnt-Miner algorithm, an ant first selects a class value to be the consequent of a rule and the terms in the antecedent are selected based on the pheromone levels of the selected class value; pheromone update occurs on the corresponding pheromone type of the class value. The pre-selection of a class value also allows the use of more precise measures for the heuristic function and the dynamic discretization of continuous attributes, and further allows for the use of a rule quality measure that directly takes into account the confidence of the rule. Experimental results on 20 benchmark datasets show that our proposed extension improves classification accuracy to a statistically significant extent compared to cAnt-Miner, and has classification accuracy similar to the well-known Ripper and PART rule induction algorithms.
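The per-class pheromone bookkeeping described above might be sketched as follows; the (class, term) dictionary layout and all names are assumptions made for illustration.

    from collections import namedtuple

    Rule = namedtuple("Rule", ["consequent", "antecedent"])

    def update_pheromone(pheromone, rule, quality, evaporation=0.1):
        # Evaporate all pheromone types, then deposit only on the pheromone
        # type of the rule's consequent class, as the abstract describes:
        # the class is chosen first, and its pheromone type alone is updated.
        for key in pheromone:
            pheromone[key] *= (1.0 - evaporation)
        for term in rule.antecedent:
            pheromone[(rule.consequent, term)] += quality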

42 citations

Book ChapterDOI
10 Sep 2014
TL;DR: This paper presents a novel Ant Colony Optimization (ACO) algorithm that optimizes neural network topology for a given dataset, describes all the elements necessary to tackle the learning problem using ACO, and experimentally compares the classification performance of the optimized topologies against the standard fully-connected three-layer topology.
Abstract: A recurring challenge in applying feed-forward neural networks to a new dataset is the need to manually tune the neural network topology. If one’s attention is restricted to fully-connected three-layer networks, then there is only the need to manually tune the number of neurons in the single hidden layer. In this paper, we present a novel Ant Colony Optimization (ACO) algorithm that optimizes neural network topology for a given dataset. Our algorithm is not restricted to three-layer networks, and can produce topologies that contain multiple hidden layers, and topologies that do not have full connectivity between successive layers. Our algorithm uses Backward Error Propagation (BP) as a subroutine, but it is possible, in general, to use any neural network learning algorithm within our ACO approach instead. We describe all the elements necessary to tackle our learning problem using ACO, and experimentally compare the classification performance of the optimized topologies produced by our ACO algorithm with the standard fully-connected three-layer network topology most commonly used in the literature.
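Since BP is used only as a subroutine to score candidate topologies, the fitness evaluation might be sketched as below. scikit-learn's MLPClassifier stands in for the paper's own BP code purely for illustration; note that it supports only fully-connected layers, whereas the paper's topologies may also be sparsely connected.

    from sklearn.neural_network import MLPClassifier

    def evaluate_topology(hidden_layers, X_train, y_train, X_val, y_val):
        # Train a candidate topology with backprop and return its validation
        # accuracy; the colony would treat this score as the solution quality
        # when updating pheromone.
        net = MLPClassifier(hidden_layer_sizes=tuple(hidden_layers),
                            max_iter=300)
        net.fit(X_train, y_train)
        return net.score(X_val, y_val)

    # e.g. an ant proposing two hidden layers of 8 and 4 neurons:
    # quality = evaluate_topology([8, 4], X_train, y_train, X_val, y_val)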

39 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: The author reflects on the ethereal nature of i, the square root of minus one, which seemed an odd beast on first encounter: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions and linear models for regression and classification, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations

Book
01 Jan 2003
TL;DR: This book discusses Bayesian reasoning, inference in and learning of Bayesian networks, Bayesian network applications, and knowledge engineering with Bayesian networks.
Abstract: Bayesian Reasoning. Introduction to Bayesian Networks. Inference in Bayesian Networks. Bayesian Network Applications. Bayesian Planning and Decision-Making. Bayesian Network Applications II. Learning Bayesian Networks I. Learning Bayesian Networks II. Causality vs. Probability. Knowledge Engineering with Bayesian Networks I. Knowledge Engineering with Bayesian Networks II. Application Software.

1,101 citations