This paper proposes a general framework for designing neural network ensembles by means of cooperative coevolution, and applies the proposed model to ten real-world classification problems of very different natures from the UCI Machine Learning Repository and the Proben1 benchmark set.
Abstract:
This paper presents a cooperative coevolutive approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, in theory, a single neural network with a sufficient number of neurons in the hidden layer can solve any problem, in practice, for many real-world problems it is too hard to construct the appropriate network that solves them. In such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, to improve the combination of the trained individual networks; second, to evolve such networks cooperatively, encouraging collaboration among them instead of training each network separately. In order to favor the cooperation of the networks, each network is evaluated throughout the evolutionary process using a multiobjective method. For each network, different objectives are defined, considering not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles performing better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of very different natures from the UCI Machine Learning Repository and the Proben1 benchmark set. In all of them, the model performs better than standard ensembles in terms of generalization error. Moreover, the obtained ensembles are also smaller.
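As a rough illustration of the two-level scheme described in the abstract, the following Python sketch evolves several subpopulations of toy networks together with a population of ensembles on synthetic data. Every name, parameter, and fitness choice here is an assumption made for illustration; in particular, plain accuracy stands in for the paper's multiobjective network evaluation, and the variation operators are deliberately simplistic.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUBPOPS, SUBPOP_SIZE = 4, 10        # one subpopulation per ensemble member
ENS_POP_SIZE, GENERATIONS = 8, 20
N_FEATURES, N_HIDDEN, N_CLASSES = 5, 6, 3

def random_network():
    """A single-hidden-layer network encoded by its two weight matrices."""
    return {"W1": rng.normal(0, 1, (N_FEATURES, N_HIDDEN)),
            "W2": rng.normal(0, 1, (N_HIDDEN, N_CLASSES))}

def forward(net, X):
    return np.tanh(X @ net["W1"]) @ net["W2"]

def accuracy(net, X, y):
    return np.mean(np.argmax(forward(net, X), axis=1) == y)

# Toy data standing in for a real classification problem.
X = rng.normal(size=(200, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=200)

# Level 1: subpopulations of networks.  Level 2: a population of ensembles,
# each ensemble being one network index per subpopulation.
subpops = [[random_network() for _ in range(SUBPOP_SIZE)]
           for _ in range(N_SUBPOPS)]
ens_pop = [rng.integers(0, SUBPOP_SIZE, size=N_SUBPOPS)
           for _ in range(ENS_POP_SIZE)]

def ensemble_accuracy(ensemble):
    """Majority vote of the networks selected by this ensemble."""
    votes = np.stack([np.argmax(forward(subpops[i][j], X), axis=1)
                      for i, j in enumerate(ensemble)])
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return np.mean(majority == y)

for gen in range(GENERATIONS):
    # Evolve each network subpopulation: keep the better half and refill
    # it with mutated copies (individual accuracy as a stand-in fitness).
    for pop in subpops:
        pop.sort(key=lambda n: accuracy(n, X, y), reverse=True)
        for k in range(SUBPOP_SIZE // 2, SUBPOP_SIZE):
            parent = pop[k - SUBPOP_SIZE // 2]
            pop[k] = {name: w + rng.normal(0, 0.1, w.shape)
                      for name, w in parent.items()}
    # Evolve the ensemble population: keep the best combinations and refill
    # with copies in which one network index is replaced at random.
    ens_pop.sort(key=ensemble_accuracy, reverse=True)
    for k in range(ENS_POP_SIZE // 2, ENS_POP_SIZE):
        child = ens_pop[k - ENS_POP_SIZE // 2].copy()
        child[rng.integers(N_SUBPOPS)] = rng.integers(SUBPOP_SIZE)
        ens_pop[k] = child

print("best evolved ensemble accuracy:", ensemble_accuracy(ens_pop[0]))
```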
TL;DR: This article provides a general overview of the field now known as "evolutionary multi-objective optimization," which refers to the use of evolutionary algorithms to solve problems with two or more (often conflicting) objective functions.
TL;DR: A novel optimization algorithm, the group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior, and performs competitively with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems.
TL;DR: Different approaches to each of these phases that are able to deal with the regression problem are discussed, categorizing them in terms of their relevant characteristics and linking them to contributions from different fields.
TL;DR: This paper proposes a new coevolutionary paradigm that hybridizes competitive and cooperative mechanisms observed in nature to solve multiobjective optimization problems and to track the Pareto front in a dynamic environment.
TL;DR: This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems and shows improvement in performance in terms of accuracy when compared to some of the methods from literature.
TL;DR: In this article, the authors present the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields, including computer programming and mathematics.
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Q1. What have the authors contributed in "Cooperative coevolution of artificial neural network ensembles for pattern classification" ?
The paper presents a cooperative coevolutive approach for designing neural network ensembles: the authors propose a general framework for designing such ensembles by means of cooperative coevolution.
Q2. What are the future works in "Cooperative coevolution of artificial neural network ensembles for pattern classification" ?
Arcing and Ada-Boosting methods also suggest the possibility of developing an incremental cooperative environment where new subpopulations are added when the evolution stagnates.
Q3. What is the important feature of the proposed algorithm?
The proposed algorithm is based on the concept of Pareto optimality [78] and was chosen with computational cost as the most important consideration.
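As a concrete illustration of Pareto optimality in this setting, the sketch below marks a network as non-dominated if no other network is at least as good on every objective and strictly better on at least one. The two objectives and their values are made up for the example; they are not the objectives defined in the paper.

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows in an (n_networks, n_objectives) array,
    assuming higher is better for every objective."""
    n = scores.shape[0]
    front = []
    for i in range(n):
        dominated = any(
            np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
            for j in range(n) if j != i)
        if not dominated:
            front.append(i)
    return front

# Two illustrative objectives per network: accuracy and a cooperation score.
scores = np.array([[0.90, 0.2],
                   [0.85, 0.6],
                   [0.80, 0.3],
                   [0.95, 0.1]])
print(pareto_front(scores))   # networks 0, 1, and 3 are non-dominated
```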
Q4. What are the commonly used methods for combining the networks?
The most commonly used methods for combining the networks are majority voting and the sum of the network outputs, both weighted by a vector that measures the confidence in the prediction of each network.
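The following sketch shows both combination schemes under the assumption that each network outputs one score per class; the weight vector w and the random outputs are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
outputs = rng.random((3, 5, 4))           # (n_networks, n_samples, n_classes)
w = np.array([0.5, 0.3, 0.2])             # confidence weight per network

# Weighted majority voting: each network votes for its predicted class.
votes = np.argmax(outputs, axis=2)        # (n_networks, n_samples)
n_classes = outputs.shape[2]
vote_scores = np.zeros((outputs.shape[1], n_classes))
for k, net_votes in enumerate(votes):
    vote_scores[np.arange(len(net_votes)), net_votes] += w[k]
majority_pred = vote_scores.argmax(axis=1)

# Weighted sum of outputs: combine the raw outputs with the same weights.
sum_pred = np.tensordot(w, outputs, axes=1).argmax(axis=1)

print(majority_pred, sum_pred)
```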
Q5. What are the common parametric mutation operators?
Many parametric mutation operators have been suggested in the literature: random modification of the weights [12], simulated annealing [35], and back-propagation [35], among others.
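A minimal sketch of the simplest of these operators, random modification of the weights, is shown below; the Gaussian step size is an illustrative choice rather than a value from any of the cited works.

```python
import numpy as np

def mutate_weights(weights, sigma=0.05, rng=None):
    """Return a copy of a list of weight matrices with Gaussian noise added."""
    if rng is None:
        rng = np.random.default_rng()
    return [W + rng.normal(0.0, sigma, W.shape) for W in weights]

parent = [np.zeros((4, 3)), np.zeros((3, 2))]   # toy two-layer network
child = mutate_weights(parent)
```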
Q6. What other techniques have been proposed in the last few years?
Many other techniques have been proposed in the last few years, such as linear regression [48], principal components analysis and least-squares regression [51], correspondence analysis [6], and the use of a validation set [25].
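As an example of one of these ideas, combination weights can be fitted by least-squares regression of the networks' outputs against the targets on held-out data; the data and names below are made up for illustration and do not reproduce any cited method exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
net_outputs = rng.random((100, 3))   # outputs of 3 networks on a validation set
targets = rng.random(100)            # desired outputs on the same set
weights, *_ = np.linalg.lstsq(net_outputs, targets, rcond=None)
combined = net_outputs @ weights     # weighted combination of the networks
```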
Q7. How many networks did Opitz and Maclin find that the error of the ensemble does not decrease?
Opitz and Maclin [85] found, after exhaustive experiments, that the error of the ensemble does not decrease further once 25 networks have been added.
Q8. How many subpopulations of networks did the authors test?
In order to test the influence of the number of network subpopulations, that is, the size of the ensembles, the authors carried out experiments for Cancer, Glass, Heart, Horse, and Pima problems with 5, 10, 15, 25, and 30 subpopulations of networks.
Q9. How do the authors represent the function vectors of a network?
In order to represent the functionality of the network, the authors perform a principal component analysis of the function vectors, retaining the first two components.
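The idea can be sketched as follows: treat each network's outputs over a fixed sample of inputs as its function vector, and project the set of vectors onto its first two principal components (here via an SVD in NumPy). The sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
function_vectors = rng.random((20, 500))   # 20 networks x 500 sampled outputs
centered = function_vectors - function_vectors.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ Vt[:2].T            # each network as a 2-D point
print(coords_2d.shape)                     # (20, 2)
```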