scispace - formally typeset
Author

Adam P. Piotrowski

Bio: Adam P. Piotrowski is an academic researcher from the Polish Academy of Sciences. The author has contributed to research in topics: Differential evolution & Metaheuristic. The author has an h-index of 22, co-authored 44 publications receiving 1408 citations.

Papers
Journal ArticleDOI
TL;DR: A comparison of methods for preventing multi-layer perceptron neural networks from overfitting the training data in daily catchment runoff modelling shows that the elaborated noise-injection method may prevent overfitting slightly better than the most popular early-stopping approach.
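The noise-injection idea can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact procedure: the training inputs are perturbed with freshly drawn zero-mean Gaussian noise each epoch, so the network cannot memorize exact training patterns.

```python
import numpy as np

def inject_noise(X, sigma=0.05, rng=None):
    """Return a copy of the training inputs perturbed with zero-mean
    Gaussian noise. Regenerating the perturbation every epoch acts as a
    regularizer for an MLP; sigma is an illustrative noise level."""
    rng = np.random.default_rng(0) if rng is None else rng
    return X + rng.normal(0.0, sigma, size=X.shape)
```

Each epoch would then train on `inject_noise(X_train)` instead of `X_train`, while validation data stays unperturbed.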

198 citations

Journal ArticleDOI
TL;DR: A fairly clear relation between population size and convergence speed has been found: the fewer function calls are available, the better smaller population sizes perform. Which specific algorithms with population-size adaptation perform best depends on the number of function calls allowed.
Abstract: The population size of Differential Evolution (DE) algorithms is often specified by the user and remains fixed during the run. During the first decade after the introduction of DE, the prevailing opinion was that its population size should be related to the problem dimensionality; later, the approaches to setting the DE population size diversified. In a large number of recently introduced DE algorithms the population size is considered problem-independent and is often fixed to 100 or 50 individuals, but alongside these, a number of DE variants with flexible population size have been proposed. The present paper briefly reviews the opinions regarding DE population size setting and verifies the impact of the population size on the performance of DE algorithms. Ten DE algorithms with fixed population size, each with at least five different population size settings, and four DE algorithms with flexible population size are tested on the CEC2005 benchmarks and the CEC2011 real-world problems. It is found that an inappropriate choice of population size may severely hamper the performance of each DE algorithm. Although the best choice of population size depends on the specific algorithm, the number of allowed function calls and the problem to be solved, some rough guidelines may be sketched. When the maximum number of function calls is set to classical values, i.e. those specified for the CEC2005 and CEC2011 competitions, for low-dimensional problems (with dimensionality below 30) a population size of 100 individuals is suggested; population sizes smaller than 50 are rarely advised. For higher-dimensional artificial problems the population size should often depend on the problem dimensionality d and be set to 3d–5d.
Unfortunately, setting a proper population size for higher-dimensional real-world problems (d > 40) turns out to be too problem- and algorithm-dependent to give any general guide; 200 individuals may be a first guess, but many DE approaches would need a much different choice, ranging from 50 to 10d. However, a fairly clear relation between the population size and the convergence speed has been found: the fewer function calls are available, the better smaller population sizes perform. Based on the extensive experimental results, the use of adaptive population size is highly recommended, especially for higher-dimensional and real-world problems. However, which specific algorithms with population-size adaptation perform best depends on the number of function calls allowed.
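The role of the population size NP in the canonical DE loop can be sketched with a minimal DE/rand/1/bin implementation (defaults are illustrative, not the paper's settings; the study above varies NP across many DE variants):

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=100, F=0.5, CR=0.9, max_evals=10000, seed=0):
    """Minimal DE/rand/1/bin sketch. NP is the population size whose
    choice the study investigates; F and CR are the usual scale factor
    and crossover rate."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((NP, d)) * (hi - lo)       # random initial population
    fit = np.array([f(x) for x in pop])
    evals = NP
    while evals + NP <= max_evals:                   # one generation costs NP evals
        for i in range(NP):
            idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)    # DE/rand/1 mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True                # force at least one gene
            trial = np.where(cross, mutant, pop[i])      # binomial crossover
            ft = f(trial)
            evals += 1
            if ft <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]
```

With a fixed budget of `max_evals`, larger NP means fewer generations, which is exactly the trade-off behind the guideline that smaller populations suit tight function-call budgets.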

188 citations

Journal ArticleDOI
TL;DR: Although results differ across specific PSO variants, for the majority of the PSO algorithms considered the best performance is obtained with swarms of 70–500 particles, indicating that the classical choice is often too small.
Abstract: Particle Swarm Optimization (PSO) is among the most universally applied population-based metaheuristic optimization algorithms. PSO has been successfully used in various scientific fields, ranging from humanities, engineering, chemistry, medicine, to advanced physics. Since its introduction in 1995, the method has been widely investigated, which led to the development of hundreds of PSO versions and numerous theoretical and empirical findings on their convergence and parameterization. However, so far there is no detailed study on the proper choice of PSO swarm size, although it is widely known that population size crucially affects the performance of metaheuristics. In most applications, authors follow the initial suggestion from 1995 and restrict the population size to 20–50 particles. In this study, we relate the performance of eight PSO variants to swarm sizes that range from 3 up to 1000 particles. Tests are performed on sixty 10- to 100-dimensional scalable benchmarks and twenty-two 1- to 216-dimensional real-world problems. Although results do differ for the specific PSO variants, for the majority of considered PSO algorithms the best performance is obtained with swarms composed of 70–500 particles, indicating that the classical choice is often too small. Larger swarms frequently improve efficiency of the method for more difficult problems and practical applications. For unimodal problems slightly lower swarm sizes are recommended for the majority of PSO variants, but some would still perform best with hundreds of particles.
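The swarm-size parameter studied above enters the standard global-best PSO update, which can be sketched as follows (a minimal global-best PSO with the common constriction-style coefficients; defaults are illustrative, not the paper's recommendations):

```python
import numpy as np

def pso(f, bounds, swarm_size=70, w=0.7298, c1=1.49618, c2=1.49618,
        iters=200, seed=0):
    """Minimal global-best PSO sketch; swarm_size is the parameter whose
    choice the study relates to performance."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    x = lo + rng.random((swarm_size, d)) * (hi - lo)   # positions
    v = np.zeros((swarm_size, d))                      # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pval.argmin()                                  # global-best index
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm_size, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval                           # update personal bests
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pval.argmin()
    return pbest[g], pval[g]
```

Raising `swarm_size` increases per-iteration cost but broadens exploration, which is why larger swarms tend to pay off on harder, higher-dimensional problems.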

171 citations

Journal ArticleDOI
TL;DR: The overall performance of the Levenberg–Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other Evolutionary Computation-based algorithms.

104 citations

Journal ArticleDOI
TL;DR: A novel DE algorithm is proposed that gathers together three of the most efficient concepts previously applied separately within the DE framework: adaptation of the algorithm's control parameters and of the probabilities of using different mutation strategies, and the use of the Nelder-Mead algorithm as a local search method hybridized with DE.

99 citations


Cited by
Journal ArticleDOI
TL;DR: It is high time to provide a critical review of the latest literature on DE and to point out some important future avenues of research.
Abstract: Differential Evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers for continuous parameter spaces in recent times. Almost five years have passed since the first comprehensive survey article on DE was published by Das and Suganthan in 2011. Several developments have been reported on various aspects of the algorithm in these five years, and research on and with DE has now reached an impressive state. Considering the huge progress of research with DE and its applications in diverse domains of science and technology, we find that it is high time to provide a critical review of the latest literature and to point out some important future avenues of research. The purpose of this paper is to summarize and organize the information on these current developments in DE. Beginning with a comprehensive foundation of the basic DE family of algorithms, we proceed through recent proposals on parameter adaptation of DE, DE-based single-objective global optimizers, DE adapted for various optimization scenarios including constrained, large-scale, multi-objective, multi-modal and dynamic optimization, hybridization of DE with other optimizers, and the multi-faceted literature on applications of DE. The paper also presents a dozen interesting open problems and future research issues on DE.

1,265 citations

Journal ArticleDOI
TL;DR: Despite a significant amount of research activity on the use of ANNs for prediction and forecasting of water resources variables in river systems, little of it focuses on methodological issues, and there is still a need for the development of robust ANN model development approaches.
Abstract: Over the past 15 years, artificial neural networks (ANNs) have been used increasingly for prediction and forecasting in water resources and environmental engineering. However, despite this high level of research activity, methods for developing ANN models are not yet well established. In this paper, the steps in the development of ANN models are outlined and taxonomies of approaches are introduced for each of these steps. In order to obtain a snapshot of current practice, ANN development methods are assessed based on these taxonomies for 210 journal papers that were published from 1999 to 2007 and focus on the prediction of water resource variables in river systems. The results obtained indicate that the vast majority of studies focus on flow prediction, with very few applications to water quality. Methods used for determining model inputs, appropriate data subsets and the best model structure are generally obtained in an ad-hoc fashion and require further attention. Although multilayer perceptrons are still the most popular model architecture, other model architectures are also used extensively. In relation to model calibration, gradient based methods are used almost exclusively. In conclusion, despite a significant amount of research activity on the use of ANNs for prediction and forecasting of water resources variables in river systems, little of this is focused on methodological issues. Consequently, there is still a need for the development of robust ANN model development approaches.

730 citations

Journal ArticleDOI
TL;DR: Six learning algorithms, including biogeography-based optimization, particle swarm optimization, the genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning, are used for the first time to train a new dendritic neuron model (DNM), making the DNM more powerful in solving classification, approximation, and prediction problems.
Abstract: Artificial neural networks (ANNs), which mimic the information-processing mechanisms and procedures of neurons in human brains, have achieved great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems: they are hard to interpret, slow and difficult to train, and difficult to scale up. These problems motivate us to develop a new dendritic neuron model (DNM) that considers the nonlinearity of synapses, not only for a better understanding of the biological neuronal system, but also to provide a more useful method for solving practical problems. To improve its problem-solving performance, six learning algorithms, including biogeography-based optimization, particle swarm optimization, the genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning, are used for the first time to train it. The best combination of its user-defined parameters has been systematically investigated using Taguchi's experimental design method. Experiments on 14 different problems involving classification, approximation, and prediction are conducted using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training the DNM, and thus make it more powerful in solving classification, approximation, and prediction problems.
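A dendritic neuron model of the kind described is commonly presented as a four-layer forward pass: sigmoidal synapses, multiplicative dendritic branches, an additive membrane, and a sigmoidal soma. The sketch below reflects that common structure under our reading; all parameter names and values are illustrative, not the paper's.

```python
import numpy as np

def dnm_forward(x, w, theta, k=5.0, ks=5.0, theta_s=0.5):
    """Sketch of a dendritic neuron model forward pass (illustrative
    parameters). x: inputs (n_inputs,); w, theta: synaptic weights and
    thresholds, shape (n_dendrites, n_inputs)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    Y = sig(k * (w * x - theta))    # synaptic layer: nonlinear per-input sigmoids
    Z = Y.prod(axis=1)              # dendritic layer: multiplicative interaction
    V = Z.sum()                     # membrane layer: sum over dendritic branches
    return sig(ks * (V - theta_s))  # soma: final sigmoid output in (0, 1)
```

Because the output is non-differentiable in a convenient way only with effort, population-based optimizers such as the six listed above can tune `w` and `theta` directly from the model's prediction error.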

517 citations

Journal ArticleDOI
TL;DR: This study attempts to go beyond the traps of metaphors and introduces a novel metaphor-free population-based optimization method based on the mathematical foundations and ideas of the Runge-Kutta (RK) method, well known in mathematics.
Abstract: The optimization field suffers from metaphor-based "pseudo-novel" or "fancy" optimizers. Most of these clichéd methods mimic animals' searching trends and contribute little to the optimization process itself. They often exhibit merely locally efficient performance, biased verification on easy problems, and high similarity between their components' interactions. This study attempts to go beyond the traps of metaphors and introduces a novel metaphor-free population-based optimization method based on the mathematical foundations and ideas of the Runge-Kutta (RK) method, well known in mathematics. The proposed RUNge Kutta optimizer (RUN) was developed to deal with various types of optimization problems. RUN utilizes the logic of slope variations computed by the RK method as a promising and logical search mechanism for global optimization. This search mechanism benefits from two active exploration and exploitation phases for exploring promising regions in the feature space and moving constructively toward the global best solution. Furthermore, an enhanced solution quality (ESQ) mechanism is employed to avoid local optima and increase the convergence speed. The efficiency of the RUN algorithm was evaluated by comparison with other metaheuristic algorithms on 50 mathematical test functions and four real-world engineering problems. RUN provided very promising and competitive results, showing superior exploration and exploitation tendencies, a fast convergence rate, and local-optima avoidance. In optimizing the constrained engineering problems, the metaphor-free RUN demonstrated suitable performance as well. The authors invite the community to evaluate this deep-rooted optimizer extensively as a promising tool for real-world optimization.
The source codes, supplementary materials, and guidance for the developed method will be publicly available at different hubs at http://imanahmadianfar.com and http://aliasgharheidari.com/RUN.html .

429 citations

Journal ArticleDOI
TL;DR: The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.
Abstract: In recent years, the research community has witnessed an explosion of literature dealing with the mimicking of behavioral patterns and social phenomena observed in nature towards efficiently solving complex computational tasks. This trend has been especially dramatic in what relates to optimization problems, mainly due to the unprecedented complexity of problem instances, arising from a diverse spectrum of domains such as transportation, logistics, energy, climate, social networks, health and industry 4.0, among many others. Notwithstanding this upsurge of activity, research in this vibrant topic should be steered towards certain areas that, despite their eventual value and impact on the field of bio-inspired computation, still remain insufficiently explored to date. The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization. An analysis and discussion are also carried out over the general trajectory followed in recent years by the community working in this field, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.

401 citations