
Showing papers by "Alberto Prieto published in 2002"


Journal ArticleDOI
TL;DR: A sequential learning algorithm is presented that adapts the structure of the network: it can create new hidden units and also detect and remove inactive ones, using a pseudo-Gaussian function.
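The grow-and-prune scheme summarized above can be sketched as follows. This is a hedged illustration in the spirit of sequential RBF learning, not the paper's exact algorithm: the thresholds, the novelty test, and the activity measure are our assumptions.

```python
import numpy as np

def maybe_grow(centers, x, err, dist_thresh=0.5, err_thresh=0.1):
    """Create a new hidden unit when a sample is both novel (far from all
    existing centers) and badly predicted (large residual error)."""
    if len(centers) == 0:
        return True
    nearest = min(np.linalg.norm(x - c) for c in centers)
    return nearest > dist_thresh and abs(err) > err_thresh

def prune_inactive(weights, activity, act_thresh=1e-3):
    """Detect and remove units whose normalized activation stayed
    negligible over a window of samples."""
    keep = activity / activity.max() > act_thresh
    return weights[keep], keep
```

A pseudo-Gaussian activation, as in the paper, would refine the plain Euclidean novelty test with an asymmetric radial function.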

148 citations


Journal ArticleDOI
TL;DR: A new clustering technique, specially designed for function approximation problems, is presented, which improves the performance of the approximator system obtained, compared with other models derived from traditional classification oriented clustering algorithms and input-output clustering techniques.
Abstract: To date, clustering techniques have mostly been oriented to solving classification and pattern recognition problems, yet some authors have applied them unchanged to construct initial models for function approximators. Classification and function approximation, however, pursue quite different objectives, so it is necessary to design new clustering algorithms specialized for function approximation. This paper presents a new clustering technique, specially designed for function approximation problems, which improves the performance of the resulting approximator system compared with models derived from traditional classification-oriented clustering algorithms and from input-output clustering techniques.
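One way a clustering algorithm can be specialized for function approximation, as opposed to classification, is to let the target values influence prototype placement. The weighted k-means below is an illustrative sketch under that assumption, not the paper's algorithm: samples in regions where the output varies strongly receive larger weights and therefore attract the prototypes.

```python
import numpy as np

def weighted_kmeans(X, w, k, iters=50, seed=0):
    """k-means whose centers are weighted averages of their members, so
    heavily weighted samples (high output variability) attract prototypes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                centers[j] = np.average(X[members], axis=0, weights=w[members])
    return centers
```

For a 1-D target y, a plausible weight choice is w = |dy/dx| (estimated by finite differences) plus a small constant, so prototypes concentrate where the function changes fastest.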

129 citations


Journal ArticleDOI
TL;DR: An exhaustive analysis of the G-Prop method is presented, and the different parameters the method requires (population size, selection rate, initial weight range, number of training epochs, etc.) are determined.
Abstract: Interest in hybrid methods that combine artificial neural networks and evolutionary algorithms has grown in the last few years, due to their robustness and their ability to design networks by setting the initial weight values and by searching for the architecture and the learning rule and its parameters. This paper presents an exhaustive analysis of the G-Prop method and determines the different parameters the method requires (population size, selection rate, initial weight range, number of training epochs, etc.). The paper also discusses the influence of the genetic operators on precision (classification ability or error) and network size in classification problems. The significance and relative importance of the parameters with respect to the results obtained, as well as suitable values for each, were determined using ANOVA (analysis of variance). Experiments show the significance of the parameters concerning the neural network and learning in hybrid methods. The parameter values found using this method were used to compare G-Prop both to itself under other parameter settings and to other published methods.
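The ANOVA step can be reproduced in miniature with a hand-rolled F statistic; the error values below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    variance; a large value means the grouping factor matters."""
    all_vals = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical classification errors (%) over repeated runs, grouped by
# population size -- made-up numbers, purely to show the mechanics
errors_pop_50  = [2.1, 2.3, 2.0, 2.2, 2.4]
errors_pop_100 = [1.6, 1.5, 1.7, 1.6, 1.8]
errors_pop_200 = [1.5, 1.6, 1.5, 1.7, 1.6]

F = one_way_anova_F([errors_pop_50, errors_pop_100, errors_pop_200])
# compare F against the F(k-1, n-k) critical value to judge significance
```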

74 citations


Journal ArticleDOI
TL;DR: This paper presents a reliable method to obtain the structure of a complete rule-based fuzzy system for a specific approximation accuracy of the training data and decides which input variables must be taken into account in the fuzzy system.
Abstract: The identification of a model is one of the key issues in the field of fuzzy system modeling and function approximation theory. There are numerous approaches to the issue of parameter optimization within a fixed fuzzy system structure, but no reliable method to obtain the optimal topology of the fuzzy system from a set of input-output data. This paper presents a reliable method to obtain the structure of a complete rule-based fuzzy system for a specific approximation accuracy of the training data, i.e., it can decide which input variables must be taken into account in the fuzzy system and how many membership functions (MFs) are needed for each selected input variable in order to reach the approximation target with the minimum number of parameters.
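The structure search can be sketched as a greedy loop that, at each step, adds one membership function to whichever variable reduces the error most. Here `err` is assumed to return the training error of a complete rule-based fuzzy system whose i-th input uses `config[i]` MFs (0 meaning the variable is left out); this is a hedged sketch, not the paper's exact procedure.

```python
def grow_structure(err, n_vars, target, max_mfs=7):
    """Greedily grow a fuzzy-system topology until the error target is met."""
    config = [0] * n_vars              # 0 = input variable not used
    while err(config) > target:
        trials = []
        for i in range(n_vars):
            if config[i] < max_mfs:
                c = config.copy()
                c[i] = max(2, c[i] + 1)    # a used variable needs >= 2 MFs
                trials.append((err(c), c))
        if not trials:
            break
        best_err, best = min(trials, key=lambda t: t[0])
        if best_err >= err(config):
            break                      # no candidate improves: stop growing
        config = best
    return config
```

Variables that never reduce the error keep `config[i] == 0`, which mirrors the "decide which input variables must be taken into account" behavior described above.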

64 citations


Book ChapterDOI
26 Jun 2002
TL;DR: In this article, the authors compare some of the freely available parallel Toolboxes for Matlab, which differ in purpose and implementation details: while DP-Toolbox and MultiMatlab offer a higher-level parallel environment, the goals of PVMTB and MPITB, developed by us, are to closely adhere to the PVM system and MPI standard, respectively.
Abstract: In this work we compare some of the freely available parallel Toolboxes for Matlab, which differ in purpose and implementation details: while DP-Toolbox and MultiMatlab offer a higher-level parallel environment, the goals of PVMTB and MPITB, developed by us [7], are to closely adhere to the PVM system and MPI standard, respectively. DP-Toolbox is also based on PVM, and MultiMATLAB on MPI. These Toolboxes allow the user to build a parallel application under the rapid-prototyping Matlab environment. The differences between them are illustrated by means of a performance test and a simple case study frequently found in the literature. Thus, depending on the preferred message-passing software and the performance requirements of the application, the user can either choose a higher-level Toolbox and benefit from easier coding, or directly interface the message-passing routines and benefit from greater control and performance.
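Performance comparisons between such toolboxes typically start from a ping-pong message-latency test. The sketch below illustrates the idea in Python, with threads and a `multiprocessing` pipe standing in for the separate Matlab processes and the PVM/MPI transport; it is not code from any of the toolboxes.

```python
import time
from threading import Thread
from multiprocessing import Pipe

def echo(conn, n):
    """Peer side: bounce every received message straight back."""
    for _ in range(n):
        conn.send(conn.recv())

def pingpong(n=1000, payload=b"x" * 1024):
    """Estimate one-way latency from n round trips of a fixed payload."""
    parent, child = Pipe()
    peer = Thread(target=echo, args=(child, n))
    peer.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send(payload)
        parent.recv()
    elapsed = time.perf_counter() - t0
    peer.join()
    return elapsed / (2 * n)
```

Repeating the measurement over a range of payload sizes separates per-message latency from bandwidth effects, which is the kind of curve such comparisons report.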

17 citations


Book ChapterDOI
28 Aug 2002
TL;DR: An application of SVD model reduction to the class of RBF neural models is proposed to improve performance in contexts such as the on-line prediction of time series; the method is evaluated on a benchmark chaotic series and on a difficult-to-predict, dynamically changing series.
Abstract: We propose an application of SVD model reduction to the class of RBF neural models to improve performance in contexts such as the on-line prediction of time series. The SVD is coupled with QR-cp factorization; it has been found that such a coupling leads to a more precise extraction of the relevant information, even when used in a heuristic way. Singular Spectrum Analysis (SSA) and its relation to our method are also discussed. We analyze the performance of the proposed on-line algorithm using a 'benchmark' chaotic time series and a difficult-to-predict, dynamically changing series.
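A minimal sketch of the SVD/QR-cp coupling, under the assumption that the goal is to pick the most relevant hidden units from an RBF activation matrix (the function and parameter names are ours):

```python
import numpy as np
from scipy.linalg import qr

def select_rbf_units(Phi, energy=0.99):
    """Rank the columns (hidden units) of an RBF activation matrix Phi.

    The SVD estimates the effective rank r (smallest number of components
    keeping `energy` of the squared spectrum); QR with column pivoting
    (QR-cp) on the leading right singular vectors then identifies the r
    columns of Phi carrying that information.
    """
    _, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    _, _, pivots = qr(Vt[:r], pivoting=True)
    return pivots[:r]          # indices of the units to keep
```

Redundant units (columns nearly in the span of others) end up late in the pivot order, so truncating to the first r pivots discards them.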

10 citations



Proceedings ArticleDOI
12 May 2002
TL;DR: From experimental results, this paper demonstrates the possible benefits offered by GAs in combination with BSS, such as robustness against local minima, the parallel search for various solutions, and a high degree of flexibility in the evaluation function.
Abstract: This paper proposes the fusion of two important paradigms, Genetic Algorithms and the Blind Separation of Sources in Nonlinear Mixtures (GABSS). Although the topic of BSS has been amply discussed in the literature by means of various techniques, including ICA, PCA, and neural networks, the possibility of using genetic algorithms has not been explored thus far. However, in nonlinear mixtures, the optimization of the system parameters and, especially, the search for invertible functions is very difficult due to the existence of many local minima. From experimental results, this paper demonstrates the possible benefits offered by GAs in combination with BSS, such as robustness against local minima, the parallel search for various solutions, and a high degree of flexibility in the evaluation function.
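The GA side of such a scheme can be sketched on a toy linear 2x2 mixture with a decorrelation-based fitness. This is only an illustration: the paper targets the much harder nonlinear case, and its evaluation function is necessarily richer than plain decorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)

# two independent sources and a linear 2x2 mixture (toy stand-in)
n = 2000
s = np.vstack([np.sign(rng.standard_normal(n)), rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

def fitness(w):
    """Cost for one individual: |correlation| between the two outputs
    of the candidate unmixing matrix (lower is better)."""
    W = np.array([[1.0, w[0]], [w[1], 1.0]])
    y = W @ x
    return abs(np.corrcoef(y)[0, 1])

# GA: elitist selection plus Gaussian mutation
pop = rng.uniform(-1, 1, size=(30, 2))
for _ in range(60):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[:10]]                       # 10 best survive
    children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(w) for w in pop])]
```

Because the fitness is evaluated independently per individual, the population explores several candidate solutions in parallel, which is the robustness-to-local-minima argument made above.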

3 citations


Journal Article
TL;DR: The fundamental principles of GAs can be hybridized with classical optimization techniques to design an evolutionary algorithm for neuro-fuzzy systems; the proposed algorithm preserves the robustness and global search capabilities of GAs and improves on their performance.
Abstract: This paper describes how the fundamental principles of GAs can be hybridized with classical optimization techniques in the design of an evolutionary algorithm for neuro-fuzzy systems. The proposed algorithm preserves the robustness and global search capabilities of GAs and improves on their performance, adding new capabilities to fine-tune the solutions obtained.
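The hybridization can be sketched as a GA for global exploration whose best individual is then fine-tuned by a classical local optimizer; the Rastrigin function below is a multimodal stand-in for a neuro-fuzzy error surface, not the paper's system.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(w):
    """Highly multimodal test function; global minimum 0 at the origin."""
    w = np.asarray(w, dtype=float)
    return 10 * len(w) + np.sum(w**2 - 10 * np.cos(2 * np.pi * w))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(40, 2))
for _ in range(100):                      # GA: robust global search
    scores = np.apply_along_axis(rastrigin, 1, pop)
    elite = pop[np.argsort(scores)[:10]]
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.3, (30, 2))
    pop = np.vstack([elite, children])

best = min(pop, key=rastrigin)
refined = minimize(rastrigin, best).x     # classical optimizer fine-tunes
```

The local step can only improve on the GA's answer, which is the "preserves robustness, adds fine-tuning" claim in the abstract.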

1 citations

