Author

Shuihua Wang

Bio: Shuihua Wang is an academic researcher from the University of Leicester. The author has contributed to research in the topics of Computer science and Artificial intelligence, has an h-index of 61, and has co-authored 275 publications receiving 10,491 citations. Previous affiliations of Shuihua Wang include Loughborough University and Zhejiang University.


Papers
Journal ArticleDOI
TL;DR: This survey presented a comprehensive investigation of PSO, including its modifications, extensions, and applications to the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we reviewed advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey of applications of PSO to the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
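For reference, the canonical velocity and position update that all the surveyed PSO variants build on can be sketched as below; the inertia weight `w`, acceleration coefficients `c1`/`c2`, swarm size, and search box are illustrative defaults of ours, not values from the survey:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal canonical PSO (Kennedy & Eberhart, 1995) minimizing f on [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# sanity check on the sphere function
best, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```

With `w = 0.7` and `c1 + c2 = 3.0` the swarm sits in the commonly cited convergent parameter region, so the sphere test settles near the origin.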

836 citations

Journal ArticleDOI
TL;DR: A novel spam detection method focused on reducing the false positive error of mislabeling nonspam as spam; experiments demonstrated that the MBPSO is superior to GA, RSA, PSO, and BPSO in terms of classification performance, and that wrappers are more effective than filters with regard to classification performance indexes.
Abstract: In this paper, we proposed a novel spam detection method that focused on reducing the false positive error of mislabeling nonspam as spam. First, we used the wrapper-based feature selection method to extract crucial features. Second, the decision tree was chosen as the classifier model, with C4.5 as the training algorithm. Third, a cost matrix was introduced to give different weights to the two error types, i.e., the false positive and the false negative errors. We defined a weight parameter a to adjust the relative importance of the two error types. Fourth, K-fold cross validation was employed to reduce out-of-sample error. Finally, the binary PSO with mutation operator (MBPSO) was used as the subset search strategy. Our experimental dataset contains 6000 emails, which were collected during the year 2012. We conducted a Kolmogorov–Smirnov hypothesis test on the capital-run-length related features and found that all the p values were less than 0.001. Afterwards, we found that a = 7 was the most appropriate in our model. Among seven meta-heuristic algorithms, we demonstrated that the MBPSO is superior to GA, RSA, PSO, and BPSO in terms of classification performance. The sensitivity, specificity, and accuracy of the decision tree with feature selection by MBPSO were 91.02%, 97.51%, and 94.27%, respectively. We also compared the MBPSO with conventional feature selection methods such as SFS and SBS. The results showed that the MBPSO performs better than SFS and SBS. We also demonstrated that wrappers are more effective than filters with regard to classification performance indexes. It was clearly shown that the proposed method is effective and can reduce the false positive error without compromising the sensitivity and accuracy values.
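The cost-matrix idea in this abstract, weighting a false positive (nonspam flagged as spam) a times more heavily than a false negative, can be illustrated with a short sketch; the function name and the toy label vectors are ours, only the weighting scheme comes from the abstract:

```python
import numpy as np

def weighted_cost(y_true, y_pred, a=7):
    """Cost-sensitive misclassification cost: a false positive (true nonspam,
    predicted spam) counts `a` times as much as a false negative.
    Labels: 1 = spam, 0 = nonspam."""
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # nonspam mislabeled as spam
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # spam missed
    return a * fp + fn

# toy example: one false positive and one false negative
cost = weighted_cost(np.array([1, 1, 0, 0, 0]), np.array([1, 0, 0, 1, 0]))
```

A wrapper-style search (such as the MBPSO described above) would then select the feature subset whose cross-validated classifier minimizes this weighted cost rather than the raw error rate.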

372 citations

Journal ArticleDOI
TL;DR: This paper presents a neural network (NN) based method to classify a given MR brain image as normal or abnormal, which first employs the wavelet transform to extract features from images and then applies principal component analysis (PCA) to reduce the dimensionality of the features.
Abstract: Automated and accurate classification of MR brain images is of importance for the analysis and interpretation of these images, and many methods have been proposed. In this paper, we present a neural network (NN) based method to classify a given MR brain image as normal or abnormal. This method first employs the wavelet transform to extract features from images, and then applies principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are sent to a back propagation (BP) NN, with which scaled conjugate gradient (SCG) is adopted to find the optimal weights of the NN. We applied this method on 66 images (18 normal, 48 abnormal). The classification accuracies on both training and test images are 100%, and the computation time per image is only 0.0451 s.
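The PCA reduction step in this pipeline can be sketched with a plain SVD; the 66 samples match the abstract, while the feature count (standing in for wavelet coefficients) and the target dimension are illustrative values of ours, not figures from the paper:

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-wise feature vectors onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                         # center the data
    # right singular vectors (rows of Vt) are the principal directions,
    # ordered by decreasing explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 1024))   # 66 images, e.g. 1024 wavelet coefficients each
Z = pca_reduce(X, k=19)           # reduced features fed to the BP neural network
```

Because the singular values are sorted in decreasing order, the first reduced coordinate carries at least as much variance as the second, which is exactly the property PCA exploits to discard low-information dimensions.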

318 citations

Journal ArticleDOI
TL;DR: This study designed and validated a 13-layer convolutional neural network (CNN) that is effective in image-based fruit classification, and observed that using data augmentation can increase the overall accuracy.
Abstract: Fruit category identification is important in factories, supermarkets, and other fields. Current computer vision systems use handcrafted features and do not achieve good results. In this study, our team designed a 13-layer convolutional neural network (CNN). Three types of data augmentation method were used: image rotation, Gamma correction, and noise injection. We also compared max pooling with average pooling. Stochastic gradient descent with momentum was used to train the CNN with a minibatch size of 128. The overall accuracy of our method is 94.94%, at least 5 percentage points higher than state-of-the-art approaches. We validated that the 13-layer structure is optimal. The GPU can achieve a 1.77× acceleration on training data, and a 1.75× acceleration on test data. We observed that using data augmentation can increase the overall accuracy. Our method is effective in image-based fruit classification.
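The three augmentation types named in the abstract can be sketched as below; `np.rot90` stands in for the paper's image rotation (the paper's rotation angles are not given here), and the gamma range and noise level are illustrative choices of ours:

```python
import numpy as np

def augment(img, rng):
    """Produce three augmented variants of a grayscale image with values in [0, 1]:
    rotation, gamma correction, and Gaussian noise injection."""
    rotated = np.rot90(img)                                        # rotation (90° stand-in)
    gamma = np.clip(img ** rng.uniform(0.7, 1.5), 0.0, 1.0)        # gamma correction
    noisy = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)  # noise injection
    return rotated, gamma, noisy

rng = np.random.default_rng(42)
img = rng.random((32, 32))
r, g, n = augment(img, rng)
```

Applying such transforms at training time multiplies the effective dataset size without collecting new images, which is the mechanism behind the accuracy gain the abstract reports.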

292 citations

Journal ArticleDOI
TL;DR: A hybrid classification method based on the fitness-scaled chaotic artificial bee colony (FSCABC) algorithm and a feedforward neural network (FNN) was shown to be effective in classifying fruits.

272 citations


Cited by
Journal ArticleDOI
TL;DR: This work presents a comprehensive survey of the advances with ABC and its applications; it is hoped that this survey will be beneficial for researchers studying SI, particularly the ABC algorithm.
Abstract: Swarm intelligence (SI) is briefly defined as the collective behaviour of decentralized and self-organized swarms. Well known examples of such swarms are bird flocks, fish schools, and colonies of social insects such as termites, ants, and bees. In the 1990s, two approaches in particular, one based on ant colonies and one on fish schooling/bird flocking, attracted strong interest from researchers. Although the self-organization features required by SI are strongly and clearly seen in honey bee colonies, researchers only began drawing on the behaviour of these swarm systems to describe new intelligent approaches from the beginning of the 2000s. Over the past decade, several algorithms have been developed based on different intelligent behaviours of honey bee swarms. Among them, the artificial bee colony (ABC) algorithm is the one that has been most widely studied and applied to real-world problems so far. The number of researchers interested in the ABC algorithm is growing rapidly. This work presents a comprehensive survey of the advances with ABC and its applications. It is hoped that this survey will be beneficial for researchers studying SI, particularly the ABC algorithm.
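The ABC cycle this survey covers has three phases: employed bees search around known food sources, onlooker bees bias further search toward better sources, and scout bees abandon exhausted sources. A minimal sketch follows; the parameter values, bounds, and test function are illustrative, not from the survey:

```python
import numpy as np

def abc_minimize(f, dim, n_food=20, iters=300, limit=50, seed=0):
    """Minimal artificial bee colony (ABC) sketch minimizing f on [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    X = rng.uniform(lo, hi, (n_food, dim))          # food source positions
    vals = np.apply_along_axis(f, 1, X)
    trials = np.zeros(n_food, dtype=int)            # stagnation counters

    def neighbor(i):
        """Perturb one random dimension of source i toward/away from another source."""
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
        return np.clip(v, lo, hi)

    def try_improve(i):
        v = neighbor(i)
        fv = f(v)
        if fv < vals[i]:
            X[i], vals[i], trials[i] = v, fv, 0     # greedy replacement
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_improve(i)
        fit = 1.0 / (1.0 + vals)                    # fitness for roulette selection
        probs = fit / fit.sum()
        for _ in range(n_food):                     # onlooker bee phase
            try_improve(int(rng.choice(n_food, p=probs)))
        for i in range(n_food):                     # scout bee phase
            if trials[i] > limit:
                X[i] = rng.uniform(lo, hi, dim)
                vals[i] = f(X[i])
                trials[i] = 0
    return float(vals.min())

best = abc_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

The one-dimension-at-a-time perturbation, scaled by the distance to a random peer, is what gives ABC its characteristic balance of exploration early and exploitation as the sources converge.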

1,645 citations

Journal ArticleDOI
TL;DR: In this paper, a comprehensive survey of the most important aspects of DL, including enhancements recently added to the field, is provided, along with the challenges and suggested solutions to help researchers understand the existing research gaps.
Abstract: In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown fast in the last few years and has been extensively used to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackled only one aspect of it, which leads to an overall lack of knowledge about the field. Therefore, in this contribution, we propose a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we further present the challenges and suggested solutions to help researchers understand the existing research gaps. This is followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.

1,084 citations

Journal ArticleDOI
TL;DR: Results prove the capability of the proposed binary version of grey wolf optimization (bGWO) to search the feature space for optimal feature combinations regardless of the initialization and the stochastic operators used.
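A common way to obtain a binary variant of a continuous optimizer such as GWO for feature selection is to pass each wolf's continuous position through a sigmoid transfer function and sample a 0/1 feature mask. The sketch below shows that idea; it is illustrative of the general binarization technique, and the paper's exact transfer rule may differ:

```python
import numpy as np

def binarize(position, rng):
    """Map a continuous search-agent position to a binary feature mask:
    each coordinate's sigmoid value is treated as the probability of
    selecting the corresponding feature."""
    probs = 1.0 / (1.0 + np.exp(-position))        # sigmoid transfer
    return (rng.random(position.shape) < probs).astype(int)

rng = np.random.default_rng(0)
mask = binarize(np.array([-3.0, 0.0, 3.0]), rng)   # 1s mark selected features
```

Strongly negative coordinates are thus unlikely to select their feature and strongly positive ones very likely, so the continuous dynamics of the wolves still drive the discrete search.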

958 citations