scispace - formally typeset
Author

Frans Van Den Bergh

Bio: Frans Van Den Bergh is an academic researcher. The author has contributed to research in the topics of Multi-swarm optimization and Benchmark (computing). The author has an h-index of 1 and has co-authored 1 publication, which has received 1,463 citations.

Papers
Dissertation
01 Jan 2002
TL;DR: This thesis presents a theoretical model of the long-term behaviour of the Particle Swarm Optimiser; empirical results on synthetic benchmark functions are presented to support the properties predicted by the various models.
Abstract: Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people's faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions.
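The thesis builds on the canonical PSO velocity and position update. As context, here is a minimal sketch of that baseline algorithm (standard inertia-weight formulation; the parameter values shown are common defaults, not the thesis's enhanced guaranteed-convergence variants):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49,
        lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic inertia-weight PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g] # best position seen by the swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(v * v for v in x), dim=2)` minimizes the 2-D sphere function, one of the synthetic benchmarks commonly used in this literature.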

1,498 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: An approach that incorporates Pareto dominance into particle swarm optimization (PSO) so that this heuristic can handle problems with several objective functions; results indicate that the approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.
Abstract: This paper presents an approach in which Pareto dominance is incorporated into particle swarm optimization (PSO) in order to allow this heuristic to handle problems with several objective functions. Unlike other current proposals to extend PSO to solve multiobjective optimization problems, our algorithm uses a secondary (i.e., external) repository of particles that is later used by other particles to guide their own flight. We also incorporate a special mutation operator that enriches the exploratory capabilities of our algorithm. The proposed approach is validated using several test functions and metrics taken from the standard literature on evolutionary multiobjective optimization. Results indicate that the approach is highly competitive and can be considered a viable alternative to solve multiobjective optimization problems.
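The two building blocks of this approach are a Pareto-dominance test and an external repository that keeps only non-dominated solutions. A minimal sketch of both (for minimization; the function names are illustrative, not the paper's):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, candidate):
    """Insert candidate into the external archive of non-dominated solutions,
    discarding it if dominated and evicting any members it dominates."""
    if any(dominates(member, candidate) for member in repo):
        return repo  # candidate is dominated by the archive: discard it
    return [m for m in repo if not dominates(candidate, m)] + [candidate]
```

In the full algorithm, particles then select a leader from this repository (rather than a single global best) to guide their flight toward the Pareto front.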

3,474 citations

Journal ArticleDOI
TL;DR: A variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer (CPSO), employs cooperative behavior among multiple swarms to significantly improve the performance of the original algorithm.
Abstract: The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.
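The cooperative decomposition described above can be sketched as follows: one swarm per coordinate of the solution vector, with each scalar particle evaluated by substituting it into a shared context vector built from the other swarms' best values. This is a minimal illustration of the idea under that one-dimension-per-swarm split; parameter names and defaults are illustrative, not the paper's:

```python
import random

def cpso(f, dim, n_particles=10, iters=100, w=0.72, c1=1.49, c2=1.49,
         lo=-5.0, hi=5.0):
    """Minimize f with one cooperating 1-D swarm per coordinate."""
    # pos[d][i]: particle i of the swarm responsible for coordinate d
    pos = [[random.uniform(lo, hi) for _ in range(n_particles)] for _ in range(dim)]
    vel = [[0.0] * n_particles for _ in range(dim)]
    pbest = [row[:] for row in pos]
    pbest_val = [[float("inf")] * n_particles for _ in range(dim)]
    context = [pos[d][0] for d in range(dim)]  # shared full solution vector
    gbest, gbest_val = context[:], f(context)

    for _ in range(iters):
        for d in range(dim):                   # let each swarm take a turn
            for i in range(n_particles):
                context[d] = pos[d][i]         # evaluate particle in context
                val = f(context)
                if val < pbest_val[d][i]:
                    pbest[d][i], pbest_val[d][i] = pos[d][i], val
                    if val < gbest_val:
                        gbest_val, gbest[d] = val, pos[d][i]
            context[d] = gbest[d]              # commit this swarm's best value
            for i in range(n_particles):
                r1, r2 = random.random(), random.random()
                vel[d][i] = (w * vel[d][i]
                             + c1 * r1 * (pbest[d][i] - pos[d][i])
                             + c2 * r2 * (gbest[d] - pos[d][i]))
                pos[d][i] += vel[d][i]
    return gbest, gbest_val
```

Because each swarm refines only one component while the others are held at their best-known values, credit for an improvement is assigned to the component that caused it, which is the source of the performance gain reported in the paper.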

2,038 citations

Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature, includes a classification of the approaches, and identifies the main features of each proposal.
Abstract: The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bio-inspired technique to other areas. One of them is multi-objective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider as promising areas of future research.

1,314 citations

Book
24 Feb 2006
TL;DR: This book presents Particle Swarm Optimization from its first formulations through benchmark sets, parameter settings, and adaptations, including the TRIBES variant (co-operation of tribes) and the dynamics of a swarm.
Abstract: Foreword. Introduction. Part 1: Particle Swarm Optimization. Chapter 1. What is a difficult problem? Chapter 2. On a table corner. Chapter 3. First formulations. Chapter 4. Benchmark set. Chapter 5. Mistrusting chance. Chapter 6. First results. Chapter 7. Swarm: memory and influence graphs. Chapter 8. Distributions of proximity. Chapter 9. Optimal parameter settings. Chapter 10. Adaptations. Chapter 11. TRIBES or co-operation of tribes. Chapter 12. On the constraints. Chapter 13. Problems and applications. Chapter 14. Conclusion. Part 2: Outlines. Chapter 15. On parallelism. Chapter 16. Combinatorial problems. Chapter 17. Dynamics of a swarm. Chapter 18. Techniques and alternatives. Further Information. Bibliography. Index.

1,293 citations