Book ChapterDOI

Particle Swarm Optimization: a tutorial

TL;DR: The chapter highlights the basic background needed to understand and implement the PSO algorithm and illustrates how the particles move through the search space to find the optimal or a near-optimal solution.
Abstract: Optimization algorithms are necessary to solve many problems, such as parameter tuning. Particle Swarm Optimization (PSO) is one of these optimization algorithms. The aim of PSO is to search for the optimal solution in the search space. This paper highlights the basic background needed to understand and implement the PSO algorithm. It starts with basic definitions of the PSO algorithm and shows how the particles move through the search space to find the optimal or a near-optimal solution. A numerical example illustrates how the particles move in a convex optimization problem, and another numerical example shows how PSO can become trapped in local minima. Two experiments are conducted to show how PSO searches for optimal parameters in one-dimensional and two-dimensional spaces to solve machine learning problems.

Summary (3 min read)

INTRODUCTION

  • Swarm optimization techniques are relatively recent methods for solving many optimization problems.
  • Optimization is used in different fields, such as economics, physics, and engineering, where the main aim is to achieve maximum production, efficiency, or some other measure.
  • In addition, many scientific, social, and commercial problems have parameters that, if properly adjusted, can produce a more desirable outcome.

  • In contrast to deterministic methods, stochastic methods employ stochastic optimization algorithms and can reach different results across runs due to randomness.
  • In the exploration phase, the goal is to explore the search space in search of the optimal or near-optimal solutions.
  • Optimization algorithms should strike a balance between random selection and greedy selection, biasing the search towards better solutions, i.e. exploitation, while exploring the search space to find different solutions, i.e. exploration (Yamany et al., 2015a, 2015b; Tharwat et al., 2015a, 2015c, 2016d).
  • The first section summarizes related work on different applications that used the PSO algorithm.

PARTICLE SWARM OPTIMIZATION (PSO)

  • The flocking behavior underlying PSO was first simulated by Reynolds and by Heppner and Grenander, and the PSO algorithm itself was introduced by Kennedy and Eberhart (Heppner & Grenander, 1990; Reynolds, 1987; Eberhart & Kennedy, 1995).
  • During the search process, the current positions of all particles are evaluated using the fitness function.
  • A large value of V_max expands the search area; the particles may therefore overshoot the best solution and fail to converge to the optimal solution.
  • The positions and velocities of all particles are updated iteratively until a predefined stopping criterion is reached (Eberhart & Kennedy, 1995; Kennedy, 2010).
  • The details of the PSO algorithm are summarized in Algorithm (1); the update rules and a minimal implementation sketch are given below.
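Algorithm (1) itself is not reproduced in this summary. For reference, the standard global-best PSO update rules, with inertia weight $w$, acceleration coefficients $c_1$ and $c_2$, and uniform random numbers $r_1, r_2 \in [0, 1]$, are

$$v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(P_{best_i} - x_i^{t}\right) + c_2 r_2 \left(G_{best} - x_i^{t}\right), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},$$

where each velocity component is clamped to $[-V_{max}, V_{max}]$. A minimal Python sketch of these updates follows; the function name `pso`, the default parameter values, and the velocity-clamping rule of thumb are illustrative choices, not the chapter's settings.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.12, 5.12), v_max=None):
    """Minimal global-best PSO; `fitness` maps a length-`dim` array to a scalar to minimize."""
    lo, hi = bounds
    v_max = 0.2 * (hi - lo) if v_max is None else v_max  # illustrative rule of thumb
    rng = np.random.default_rng()

    x = rng.uniform(lo, hi, (n_particles, dim))           # particle positions
    v = rng.uniform(-v_max, v_max, (n_particles, dim))    # particle velocities
    pbest = x.copy()                                      # personal best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)          # personal best fitness values
    gbest = pbest[np.argmin(pbest_f)].copy()              # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))        # fresh random factors each step
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)                     # enforce the V_max limit
        x = np.clip(x + v, lo, hi)                        # keep particles inside the search space
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f                              # particles that improved
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(np.min(pbest_f))
```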

Numerical Examples

  • Two numerical examples are presented to show how the particles move through the search space to find the optimal solution.
  • In the second example, the local-minima problem is simulated to show how the particles can become trapped in one or more local solutions instead of reaching the global solution.

First Example: PSO Example

  • The PSO algorithm is explained in detail to show how the particles move and to show the influence of each parameter on the PSO algorithm.
  • Figure 2 shows the surface and contour plot of the De Jong function.
  • At the beginning, the particles were moved to their new positions and their fitness values were calculated, as shown in Table 4.
  • In this iteration, the velocities of the particles were not zero; in other words, the particles would keep moving in the following iterations. A runnable sketch of this example is given below.
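Table 4 and Figure 2 are not reproduced in this summary. As a rough illustration, the `pso` sketch above can be run on De Jong's first (sphere) function; the swarm size and iteration count below are illustrative, not the values used in the chapter.

```python
# De Jong's first (sphere) function: convex, with its minimum of 0 at the origin.
de_jong = lambda p: float(np.sum(p ** 2))

best_pos, best_val = pso(de_jong, dim=2, n_particles=5, iters=20)
print(best_pos, best_val)  # expected: a position near [0, 0] with a value close to 0
```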

Discussion

  • Figure 8 shows the convergence of the particles in each run.
  • As shown, the five runs converged to different solutions, and none of them reached the optimal solution.
  • Therefore, in each run, the PSO algorithm achieved a different final value.
  • As shown, the convergence rate depends mainly on the initial solutions.
  • Nearly all recent optimization algorithms can solve this problem, but they do not guarantee reaching the same solution in each run, due to their stochastic nature.

Second Example: Local Optima Problem

  • A numerical example is presented to illustrate the local optima problem of the PSO algorithm.
  • The Rastrigin function (see Equation 4) was used as the fitness function; its standard form is reproduced below.
  • As shown, the function is not convex: it has many local optima, with the global optimum located at the origin, where the optimal value is zero.
  • Moreover, the search space in both the x and y dimensions is bounded by -5.12 and 5.12.
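Equation 4 itself is not reproduced in this summary; the standard two-dimensional Rastrigin function it refers to is

$$f(x, y) = 20 + x^{2} - 10\cos(2\pi x) + y^{2} - 10\cos(2\pi y), \qquad x, y \in [-5.12, 5.12],$$

which reaches its global minimum $f(0, 0) = 0$ at the origin and is surrounded by a regular grid of local minima.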

PSO with Different Runs

  • In contrast to the first example, each step of the PSO algorithm is not explained here, because the steps were already covered in the first example.
  • Instead, in this example, the PSO algorithm was run five times.
  • The particles’ positions were then iteratively updated using the PSO algorithm.
  • The maximum number of iterations was 20; a sketch of such repeated runs is given below.
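The chapter's exact five-run experiment is not reproduced here. As a rough sketch of the idea, the `pso` function given earlier can simply be called repeatedly on the Rastrigin fitness function, each call starting from a fresh random initialization.

```python
# Two-dimensional Rastrigin function: many local minima, global minimum of 0 at the origin.
def rastrigin(p):
    return float(10 * p.size + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p)))

for run in range(5):
    _, best_val = pso(rastrigin, dim=2, iters=20, bounds=(-5.12, 5.12))
    print(f"run {run + 1}: best value = {best_val:.4f}")
# The five printed values typically differ; a run that starts near a deep local
# minimum can stay trapped there instead of reaching the origin.
```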

EXPERIMENTAL RESULTS AND DISCUSSION

  • Two experiments were conducted to show how the PSO algorithm can be used to solve machine learning problems.
  • In the first experiment, PSO was used to find the optimal value of the k parameter of the k-Nearest Neighbor (k-NN) classifier.
  • In that case, PSO searches for the optimal value in a one-dimensional space.
  • In the second experiment, the PSO algorithm was used to search for the penalty and kernel parameters of the SVM classifier.
  • Here, PSO searched a two-dimensional space; a sketch of this setup is given below.
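The chapter's experimental code is not included in this summary. The sketch below shows the general idea for the two-dimensional SVM case, assuming scikit-learn's `SVC`, five-fold cross-validation, and a log2-scaled search space for the penalty parameter C and the RBF kernel parameter gamma; the dataset, swarm size, bounds, and iteration count are illustrative assumptions, not the chapter's settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # a stand-in two-class dataset

def svm_fitness(params):
    """Cross-validated error rate at one (log2 C, log2 gamma) point of the 2-D space."""
    C, gamma = 2.0 ** params[0], 2.0 ** params[1]
    accuracy = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    return 1.0 - accuracy  # PSO minimizes, so optimize the error rate

best_params, best_err = pso(svm_fitness, dim=2, n_particles=10, iters=30,
                            bounds=(-10, 10))
print("C =", 2.0 ** best_params[0], "gamma =", 2.0 ** best_params[1],
      "CV error =", best_err)
```

The one-dimensional k-NN case is analogous: each particle position is a single number that is rounded to an integer k before evaluating the cross-validated error of a k-NN classifier.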

Experimental Setup

  • These datasets are widely used in the literature to compare the performance of different classification methods.
  • As shown in Table 6, all the datasets have only two classes.
  • In k-fold cross-validation, the original samples of the dataset are randomly partitioned into k subsets of equal size, and the experiment is run k times.
  • The average of the k results from the folds is then calculated to produce a single estimate, as in the sketch below.
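As a minimal sketch of the k-fold procedure described above (the helper name `kfold_accuracy` is an illustrative choice, and any scikit-learn-style classifier can be passed in):

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_accuracy(model, X, y, k=10):
    """Partition the samples into k equal folds; train on k-1 and test on the held-out fold, k times."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(X):
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))  # the average of the k results gives a single estimate
```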

Experimental Scenarios

  • The first experiment was conducted to compare the proposed PSO-SVM algorithm with the Genetic Algorithm (GA).
  • Table 7 shows the results of this experiment.
  • As shown in the table, the PSO algorithm achieved better results than GA.
  • Moreover, in terms of CPU time, the PSO algorithm required less time than the GA.
  • In the second experiment, the PSO algorithm was used to optimize the SVM classifier.

CONCLUSION

  • Optimization algorithms have recently been used in many applications.
  • Particle Swarm Optimization (PSO) is one of the best-known such algorithms.
  • This paper explains the steps of the PSO algorithm and how the particles converge to the optimal solution.
  • In addition, two numerical examples were given and graphically illustrated to explain the steps of the PSO algorithm and to show how it can become trapped in local minima.
  • Moreover, using standard datasets, two experiments were conducted to show how the PSO algorithm is used to optimize the k-NN classifier, where the search space is one-dimensional, and the SVM classifier, where the search space is two-dimensional.


Title: Particle Swarm Optimization: a tutorial
Authors: Tharwat, A., Gaber, T., Hassanien, A. E., & Elnaghi, B. E.
DOI: http://dx.doi.org/10.4018/978-1-5225-2229-4.ch026
Publication title: Handbook of Research on Machine Learning Innovations and Trends
Publisher: IGI Global
Type: Book Section
USIR URL: http://usir.salford.ac.uk/id/eprint/61013/
Published Date: 2017


Citations
Journal ArticleDOI
TL;DR: A new model is proposed that uses the Genetic Algorithm (GA) to optimize the coverage requirements in WSNs to provide continuous monitoring of specified targets for the longest possible time with limited energy resources.
Abstract: Recently, Wireless Sensor Networks (WSNs) have been widely used for monitoring and tracking applications. Sensor mobility adds extra flexibility and greatly expands the application space. Due to its limited energy and battery lifetime, each sensor can remain active only for a limited amount of time. To avoid the drawbacks of the classical coverage model, especially when a sensor dies, the K-coverage model requires at least k sensor nodes to monitor any target for it to be considered covered. This paper proposes a new model that uses the Genetic Algorithm (GA) to optimize the coverage requirements in WSNs, providing continuous monitoring of specified targets for the longest possible time with limited energy resources. Moreover, we allow sensor nodes to move to appropriate positions to collect environmental information. Our model is based on the continuous, variable-speed movement of mobile sensors to keep all targets under cover at all times. To further prove that the proposed model is better than other related work, a set of experiments in different working environments and a comparison with the most related work were conducted. The improvement our method achieved in network lifetime was in the range of 26%–41.3% using stationary nodes and 29.3%–45.7% using mobile nodes. In addition, network throughput improved by 13%–17.6%, and the running time to form the network structure and switch between node modes was reduced by 12%.

126 citations

Journal ArticleDOI
TL;DR: The proposed algorithm was compared with three well-known optimization algorithms: Multi-Objective Particle Swarm Optimization (MOPSO), Multi-Objective Ant Lion Optimizer (MOALO), and Non-dominated Sorting Genetic Algorithm version 2 (NSGA-II); and the obtained results show that the MOGOA algorithm is able to provide competitive results and outperform other algorithms.
Abstract: Grasshopper Optimization Algorithm (GOA) was modified in this paper, to optimize multi-objective problems, and the modified version is called Multi-Objective Grasshopper Optimization Algorithm (MOGOA). An external archive is integrated with the GOA for saving the Pareto optimal solutions. The archive is then employed for defining the social behavior of the GOA in the multi-objective search space. To evaluate and verify the effectiveness of the MOGOA, a set of standard unconstrained and constrained test functions are used. Moreover, the proposed algorithm was compared with three well-known optimization algorithms: Multi-Objective Particle Swarm Optimization (MOPSO), Multi-Objective Ant Lion Optimizer (MOALO), and Non-dominated Sorting Genetic Algorithm version 2 (NSGA-II); and the obtained results show that the MOGOA algorithm is able to provide competitive results and outperform other algorithms.

82 citations

Journal ArticleDOI
TL;DR: This paper proposes a social ski-driver (SSD) optimization algorithm, inspired by different evolutionary optimization algorithms, for optimizing the parameters of SVMs with the aim of improving classification performance.
Abstract: The parameters of support vector machines (SVMs), such as the kernel parameters and the penalty parameter, have a great influence on the accuracy and complexity of the classification models. In the past, different evolutionary optimization algorithms were employed for optimizing SVMs; in this paper, we propose a social ski-driver (SSD) optimization algorithm, inspired by different evolutionary optimization algorithms, for optimizing the parameters of SVMs with the aim of improving the classification performance. To cope with imbalanced data, one of the challenging problems in building robust classification models, the proposed algorithm (SSD-SVM) was enhanced to deal with imbalanced data. In this study, eight standard imbalanced datasets were used for testing our proposed algorithm. For verification, the results of the SSD-SVM algorithm were compared with grid search, a conventional method of searching parameter values, and particle swarm optimization (PSO). The experimental results show that the SSD-SVM algorithm is capable of finding near-optimal values of the SVM parameters and demonstrated high classification performance compared to the PSO algorithm.

80 citations

Journal ArticleDOI
TL;DR: The Whale Optimization Algorithm (WOA) is proposed to optimize the parameters of SVM so that the classification error can be reduced; the experimental results proved that the proposed model achieved high sensitivity to all toxic effects.

71 citations

Journal ArticleDOI
TL;DR: Theoretical analysis and practical results show that there is no significant difference between the PSO-style algorithms regarding their performance.
Abstract: Optimization algorithms are widely employed for finding optimal solutions in many applications. Stochastic optimization algorithms, including nature-inspired optimization algorithms, are simple and easy to implement, which is why there is growing interest in this research area. Recently, many nature-inspired optimization algorithms have been proposed for solving many optimization problems. Moreover, with the aim of improving the performance of optimization algorithms, some modifications were applied, such as combining different algorithms and employing sampling techniques to replace critical parameters in the optimization algorithms. This paper compares five widely used PSO-style optimization algorithms to investigate whether there is a significant difference between them. Theoretically, we explain the different PSO-style algorithms and discuss the similarities and differences between them. Practically, a number of experiments were conducted to compare these algorithms. Theoretical analysis and practical results show that there is no significant difference between the PSO-style algorithms regarding their performance.

37 citations

References
Proceedings ArticleDOI
04 Oct 1995
Eberhart, R., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95).
TL;DR: The optimization of nonlinear functions using particle swarm methodology is described and implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm.
Abstract: The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.

14,477 citations

Proceedings ArticleDOI
01 Aug 1987
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In: Proceedings of SIGGRAPH '87.
TL;DR: This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually: the simulated birds are the particles, and the aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course.
Abstract: The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the "animator." The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.

7,365 citations

01 Jan 2010
Kennedy, J. (2010). Particle swarm optimization. In: Encyclopedia of Machine Learning. Springer.

6,571 citations

Dissertation
01 Jan 2002
van den Bergh, F. (2002). An analysis of particle swarm optimizers. PhD thesis, University of Pretoria.
TL;DR: This thesis presents a theoretical model that can be used to describe the long-term behaviour of the Particle Swarm Optimiser and results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties.
Abstract: Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people's faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions.

1,498 citations

Journal ArticleDOI
TL;DR: In this article, a particle swarm optimization (PSO) for reactive power and voltage control (volt/VAr control: VVC) considering voltage security assessment (VSA) is presented.
Abstract: This paper presents a particle swarm optimization (PSO) for reactive power and voltage control (volt/VAr control: VVC) considering voltage security assessment (VSA). VVC can be formulated as a mixed-integer nonlinear optimization problem (MINLP). The proposed method expands the original PSO to handle a MINLP and determines an online VVC strategy with continuous and discrete control variables such as automatic voltage regulator (AVR) operating values of generators, tap positions of on-load tap changers (OLTC) of transformers, and the number of reactive power compensation equipment. The method considers voltage security using a continuation power flow and a contingency analysis technique. The feasibility of the proposed method is demonstrated and compared with reactive tabu search (RTS) and the enumeration method on practical power system models, with promising results.

1,340 citations


"Particle Swarm Optimization : a tut..." refers background in this paper

  • ...…learning (Yamany et al., 2015a; Tharwat et al.,2015c; Ibrahim & Tharwat, 2014; Tharwat et al., 2015b, Tharwat et al., 2016a, Tharwat et al., 2016e), power electronics (Yoshida et al., 2000), and numerical problems (Vesterstrom & Thomsen, 2004), and mechanical problems (dos Santos Coelho, 2010)....


Frequently Asked Questions (1)
Q1. What contributions have the authors mentioned in the paper "Particle Swarm Optimization: a tutorial"?

In this paper, the authors present a tutorial on the Particle Swarm Optimization (PSO) algorithm: they explain the basic background needed to understand and implement it, give numerical examples showing how the particles move and how PSO can become trapped in local minima, and conduct two experiments in which PSO searches for optimal classifier parameters in one-dimensional and two-dimensional spaces.