
Showing papers in "International Journal of Computational Intelligence and Applications in 2004"


Journal ArticleDOI
TL;DR: It turns out that EMO is preferable to the single objective approach in several respects and the evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy.
Abstract: For face recognition from video streams, speed and accuracy are vital aspects. The first decision, whether a preprocessed image region represents a human face or not, is often made by a feed-forward neural network (NN), e.g. in the Viisage-FaceFINDER® video surveillance system. We describe the optimisation of such a NN by a hybrid algorithm combining evolutionary multi-objective optimisation (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO and a single objective approach, both with online search strategy adaptation. It turns out that EMO is preferable to the single objective approach in several respects.

42 citations
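At the heart of an EMO approach of this kind is Pareto dominance over competing objectives, e.g. a network's error rate versus its size. A minimal sketch with illustrative objective values, not the paper's algorithm:

```python
def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep candidates not dominated by any other (all objectives minimised)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# e.g. (error rate, hidden units) for four hypothetical evolved networks
nets = [(0.02, 120), (0.02, 80), (0.05, 40), (0.03, 100)]
print(pareto_front(nets))   # -> [(0.02, 80), (0.05, 40)]
```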


Journal ArticleDOI
TL;DR: Experimental results show robust performance of the PCA based method against most prominent attacks and the presented technique has been successfully evaluated and compared with DCT and DWT based watermarking methods.
Abstract: We propose a robust digital watermarking technique based on Principal Component Analysis (PCA) and evaluate the effectiveness of the method against some watermark attacks. In this proposed method, watermarks are embedded in the PCA domain and the method is closely related to DCT or DWT based frequency-domain watermarking. The orthogonal basis functions, however, are determined by data and they are adaptive to the data. The presented technique has been successfully evaluated and compared with DCT and DWT based watermarking methods. Experimental results show robust performance of the PCA based method against most prominent attacks.

30 citations
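A hedged numpy sketch of the general idea: learn a PCA basis from image blocks, perturb selected coefficients with watermark bits, and reconstruct. Block size, strength alpha and the choice of coefficients are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def embed_pca_watermark(blocks, bits, alpha=2.0):
    """blocks: (n, d) matrix of flattened image blocks; bits: +/-1 sequence."""
    mean = blocks.mean(axis=0)
    X = blocks - mean
    # PCA basis computed from the data itself, which is what makes it adaptive
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = X @ Vt.T
    # spread each watermark bit over one principal-component coefficient
    for i, b in enumerate(bits):
        coeffs[:, 1 + i] += alpha * b
    return coeffs @ Vt + mean   # watermarked blocks

rng = np.random.default_rng(0)
blocks = rng.normal(size=(64, 16))            # 64 blocks of 4x4 pixels
marked = embed_pca_watermark(blocks, [1, -1, 1])
print(np.abs(marked - blocks).max())
```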


Journal ArticleDOI
Pandian Vasant
TL;DR: A new fuzzy linear programming based methodology using a modified S-curve membership function is used to solve fuzzy mix product selection problem in Industrial Engineering.
Abstract: Any modern industrial manufacturing unit inevitably faces problems of vagueness in various aspects such as raw material availability, human resource availability, processing capability, and constraints and limitations imposed by the marketing department. Such a complex problem of vagueness and uncertainty can be handled by the theory of fuzzy linear programming. In this paper, a new fuzzy linear programming based methodology using a modified S-curve membership function is used to solve a fuzzy mix product selection problem in industrial engineering. Profit and satisfaction levels are computed using the fuzzy linear programming approach. Since there are several decisions to be taken, a performance measure has been defined to identify the decision with a high level of profit and a high degree of satisfaction.

18 citations
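A sketch of the modified S-curve membership function used in this line of work (Vasant), with the commonly cited constants and the membership capped to [0.001, 0.999]; treat the exact values as an assumption here:

```python
import numpy as np

B, C, ALPHA = 1.0, 0.001001001, 13.8135

def s_curve(x, x_a, x_b):
    """Monotone decreasing membership: ~0.999 at x_a, ~0.001 at x_b."""
    x = np.asarray(x, dtype=float)
    t = (x - x_a) / (x_b - x_a)               # normalise to [0, 1]
    mu = B / (1.0 + C * np.exp(ALPHA * t))
    return np.where(x < x_a, 0.999, np.where(x > x_b, 0.001, mu))

# membership of raw-material availability between a pessimistic and an
# optimistic bound (illustrative numbers)
print(s_curve([100, 150, 200], x_a=100, x_b=200))
```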


Journal ArticleDOI
TL;DR: A new Hidden Markov Model Combining Scores approach improves on previous methods for homology detection of protein domains; six scoring algorithms are combined as a way of extracting features from a protein sequence.
Abstract: A few years ago, Jaakkola and Haussler published a method combining generative and discriminative approaches for detecting protein homologies. The method was a variant of support vector machines using a new kernel function called the Fisher kernel. They begin by training a generative hidden Markov model for a protein family. Then, using the model, they derive a vector of features, called Fisher scores, that is assigned to the sequence, and use a support vector machine in conjunction with the Fisher scores for protein homology detection. In this paper, we revisit the idea of using a discriminative approach, and in particular support vector machines, for protein homology detection. However, in place of the Fisher scoring method, we present a new Hidden Markov Model Combining Scores approach. Six scoring algorithms are combined as a way of extracting features from a protein sequence. Experiments show that our method improves on previous methods for homology detection of protein domains.

16 citations
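A hedged sklearn sketch of the combining-scores idea: several scoring functions are each applied to a sequence, the scores are stacked into a feature vector, and an SVM separates family members from non-members. The placeholders below stand in for the paper's six HMM-based scores:

```python
import numpy as np
from sklearn.svm import SVC

def score_vector(seq, scorers):
    return np.array([s(seq) for s in scorers])

# placeholder "scoring algorithms" standing in for the paper's six HMM scores
scorers = [len,
           lambda s: s.count("A"),
           lambda s: s.count("L") / max(len(s), 1)]

train_seqs = ["ALAVLL", "AALLLA", "GGKRPT", "KRGTPG"]
labels = [1, 1, 0, 0]                      # 1 = family member
X = np.array([score_vector(s, scorers) for s in train_seqs])

clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict([score_vector("ALLLAA", scorers)]))
```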


Journal ArticleDOI
TL;DR: This work applies Evolutionary Computation to achieve completely self-adapting Evolving Fuzzy Neural Networks (EFuNNs) for operating in both incremental and batch modes, demonstrating a substantial improvement in EFuNN's performance.
Abstract: This work applies Evolutionary Computation to achieve completely self-adapting Evolving Fuzzy Neural Networks (EFuNNs) for operating in both incremental (on-line) and batch (off-line) modes. EFuNNs belong to a class of Evolving Connectionist Systems (ECOS), capable of performing clustering-based, on-line, local area learning and rule extraction. Through Evolutionary Computation, its parameters such as learning rates and membership functions are continuously adjusted to reflect the changes in the dynamics of incoming data. The proposed methods are tested on the Mackey–Glass series and the results demonstrate a substantial improvement in EFuNN's performance.

15 citations
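The Mackey–Glass benchmark used in the experiments can be generated by a standard Euler integration of dx/dt = a*x(t-tau)/(1+x(t-tau)^10) - b*x(t); the usual a=0.2, b=0.1, tau=17 are assumed here, and the paper's exact settings may differ:

```python
import numpy as np

def mackey_glass(n, tau=17, a=0.2, b=0.1, x0=1.2, dt=1.0):
    history = int(tau / dt)
    x = np.full(n + history, x0)
    for t in range(history, n + history - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (a * x_tau / (1 + x_tau**10) - b * x[t])
    return x[history:]

series = mackey_glass(1000)
print(series[:5])
```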


Journal ArticleDOI
TL;DR: The simple principal component analysis (SPCA) is applied to compress the dimensionality of portions that constitute a face and the difference in the value of cos θ between the true and the false smiles is clarified and the true smile is discriminated.
Abstract: Research on the "man-machine interface" has increased in many fields of engineering, and its application to facial expression recognition is expected. The eigenface method based on principal component analysis (PCA) is popular in this research field. However, it is not easy to compute the eigenvectors of a large matrix once the computational cost of time-varying processing is taken into consideration. In this paper, in order to achieve high-speed PCA, simple principal component analysis (SPCA) is applied to compress the dimensionality of the portions that constitute a face. A value of cos θ is calculated using an eigenvector obtained by SPCA as well as the gray-scale image vector of each picture pattern. Using neural networks (NNs), the difference in the value of cos θ between the true and the false (plastic) smiles is clarified, and the true smile is discriminated. Finally, in order to show the effectiveness of the proposed face classification method for true or false smiles, computer simulations are done with real images. Furthermore, an experiment using the self-organising map (SOM) is also conducted as a comparison.

11 citations
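The cosine measure the paper computes between an SPCA eigenvector and a gray-scale image vector is just the normalised dot product; a minimal numpy version with illustrative vector sizes:

```python
import numpy as np

def cos_theta(eigvec, image_vec):
    return float(np.dot(eigvec, image_vec) /
                 (np.linalg.norm(eigvec) * np.linalg.norm(image_vec)))

rng = np.random.default_rng(1)
v = rng.normal(size=64)    # SPCA eigenvector of a facial-portion patch (illustrative)
x = rng.normal(size=64)    # gray-scale vector of an 8x8 picture pattern
print(cos_theta(v, x))
```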


Journal ArticleDOI
TL;DR: A novel procedure for normalising Mercer kernels is suggested, which leads to a normalised kernel based FCM (NKFCM) clustering algorithm that possesses strong adaptability to cluster structures within data samples.
Abstract: In this paper, a novel procedure for normalising Mercer kernels is first suggested. The normalised Mercer kernel techniques are then applied to the fuzzy c-means (FCM) algorithm, which leads to a normalised kernel based FCM (NKFCM) clustering algorithm. In the NKFCM algorithm, the implicit assumptions about the shapes of clusters in the FCM algorithm are removed, so that the new algorithm possesses strong adaptability to cluster structures within data samples. Moreover, a new method for calculating the prototypes of clusters in input space is also proposed, which is essential for data clustering applications. Experimental results on several benchmark datasets have demonstrated the promising performance of the NKFCM algorithm in different scenarios.

11 citations
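The paper's own normalisation procedure is not spelled out in the abstract; shown here, as an assumption of the general idea, is the standard way a Mercer kernel matrix is normalised so that every mapped point has unit norm in feature space, K'(x,y) = K(x,y)/sqrt(K(x,x)K(y,y)):

```python
import numpy as np

def normalise_kernel(K):
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = np.exp(X @ X.T)                         # some Mercer kernel matrix
Kn = normalise_kernel(K)
print(np.allclose(np.diag(Kn), 1.0))        # True: unit self-similarity
```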


Journal ArticleDOI
TL;DR: This paper presents the design and analysis of several systems to solve Vehicle Routing Problems with Time Windows limiting the search to a small number of solutions explored, which combine a metaheuristic technique with a route building heuristic.
Abstract: This paper presents the design and analysis of several systems to solve Vehicle Routing Problems with Time Windows (VRPTW) while limiting the search to a small number of explored solutions. All of them combine a metaheuristic technique with a route building heuristic. Simulated Annealing (SA), different evolutionary approaches and hybrid methods have been tried. Preliminary results for each of the strategies are presented in the paper, among which the combination of some iterations of the best evolutionary approach with some iterations of SA stands out. A more exhaustive analysis of the three best-performing methods is also presented, confirming the previous results. The different strategies have been implemented and tested on a series of the well-known Solomon benchmark problems of size up to 100 customers. One of the described systems, combined with a local optimisation step that tries to optimise parts of a solution, is being used as part of a real oil distribution system, obtaining very satisfactory results for the company.

10 citations
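A minimal simulated-annealing skeleton of the kind combined with route-building heuristics here; the two-swap neighbourhood and tour-length cost are placeholders, not the authors' VRPTW operators:

```python
import math, random

def simulated_annealing(route, cost, T0=100.0, alpha=0.995, iters=5000):
    best = cur = route[:]
    random.seed(0)
    T = T0
    for _ in range(iters):
        cand = cur[:]
        i, j = random.sample(range(len(cand)), 2)   # swap two customers
        cand[i], cand[j] = cand[j], cand[i]
        d = cost(cand) - cost(cur)
        if d < 0 or random.random() < math.exp(-d / T):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        T *= alpha
    return best

customers = [(2, 3), (5, 1), (1, 7), (6, 6), (3, 0)]
tour_len = lambda r: sum(math.dist(customers[r[k]], customers[r[(k + 1) % len(r)]])
                         for k in range(len(r)))
print(simulated_annealing(list(range(5)), tour_len))
```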


Journal ArticleDOI
TL;DR: This paper addresses the issue of obtaining the optimal rotation to match two functions on the sphere by minimizing the squared error norm and the Kullback–Leibler information criteria.
Abstract: This paper addresses the issue of obtaining the optimal rotation to match two functions on the sphere by minimizing the squared error norm and the Kullback–Leibler information criteria. In addition, the accuracy of the band-limited approximations in both cases is also discussed. Algorithms for fast and accurate rotational matching play a significant role in many fields ranging from computational biology to spacecraft attitude estimation. In electron microscopy, peaks in the so-called "rotation function" determine correlations in orientation between density maps of macromolecular structures when the correspondence between the coordinates of the structures is not known. In X-ray crystallography, the rotational matching of Patterson functions in Fourier space is an important step in the determination of protein structures. In spacecraft attitude estimation, a star tracker compares observed patterns of stars with rotated versions of a template that is stored in its memory. Many algorithms for comput...

8 citations
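As a toy version of the matching problem, one can minimise the squared error norm between f and a rotated copy g by brute-force search over rotations; real algorithms of the kind surveyed use fast SO(3) transforms, so this sketch only makes the objective concrete:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # sample points on the sphere

f = lambda U: np.exp(U[:, 0])                        # a smooth test function on the sphere
R_true = Rotation.from_euler("zyz", [0.4, 0.7, -0.2]).as_matrix()
g = lambda U: f(U @ R_true)                          # g is a rotated copy of f

best_err, best_R = np.inf, None
for euler in rng.uniform(-np.pi, np.pi, size=(5000, 3)):
    R = Rotation.from_euler("zyz", euler).as_matrix()
    err = np.sum((f(pts) - g(pts @ R.T)) ** 2)       # squared error norm
    if err < best_err:
        best_err, best_R = err, R
print(best_err)   # shrinks toward zero as the rotation search is refined
```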


Journal ArticleDOI
TL;DR: Wolfram’s main thesis is that the authors now have the computational means to look at theories based on simpler rules that may not admit of shortcut routes to their predictions, and that this new way of doing science is to run computer simulations of complex systems.
Abstract: This is a very unusual book. In 846 pages of text and 349 pages of notes, author Stephen Wolfram claims no less than that we have all been doing science with an outmoded set of analytical tools and that he has invented a new approach that can lead to rapid progress in fields ranging from mathematics to biology. This new way of doing science is, in short, to run computer simulations of complex systems. Wolfram’s claim is based on two major subclaims: First, that essentially all phenomena of interest to scientists can be described and studied as computations; and second, in what Wolfram calls the Principle of Computational Equivalence, that above a certain level of simplicity, essentially all computational systems embody a similar level of complexity. In an idea that seems close in spirit to Turing’s treatment of the halting problem, he further suggests that the only way to predict the behavior of such systems is to run them on a computer and see what happens. Embedded in this innocent-sounding proposal is a very deep criticism of the way theoretical science, particularly physics and mathematics, has developed up to the present day, namely, that scientists have always selected which theories to pursue based on a kind of aesthetic principle which favors those theories that most easily yield predictions via standard mathematical analysis. The reason for this idea of beauty in theories is that before the advent of modern computers it was not practical to develop predictions from theories by working out the consequences of simple rules at very large numbers of points in space and time; rather, short cuts, for example, the analytical solution of equations that might be more complex, had to be used to reduce the amount of work involved in projecting the outcome of a theory to distant times and places. Wolfram’s main thesis, and, indeed, I think the main contribution of this book, is to point out that we now have the computational means to look at theories based on simpler rules that may not admit of shortcut routes to their predictions.

8 citations
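The kind of system the book builds its case on is easy to reproduce: an elementary cellular automaton such as Rule 30 has a trivially simple update rule, yet, on Wolfram's argument, its long-run behaviour can only be obtained by running it. A minimal simulation:

```python
import numpy as np

def step(cells, rule=30):
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right          # 3-cell neighbourhood encoded as 0..7
    table = (rule >> np.arange(8)) & 1          # rule number unpacked into a lookup table
    return table[idx]

cells = np.zeros(79, dtype=int)
cells[39] = 1                                   # a single black cell
for _ in range(20):
    print("".join(" #"[c] for c in cells))
    cells = step(cells)
```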


Journal ArticleDOI
TL;DR: It is shown that the stability analysis problems of nonlinear interconnected systems can be reduced to linear matrix inequality (LMI) problems via suitable Lyapunov functions and T–S fuzzy techniques.
Abstract: This paper is concerned with the stability problem of nonlinear interconnected systems. Each such system consists of a few interconnected subsystems, which are approximated by Takagi–Sugeno (T–S) type fuzzy models. In terms of Lyapunov's direct method, a stability criterion is derived to guarantee the asymptotic stability of interconnected systems. It is shown that the stability analysis problems of nonlinear interconnected systems can be reduced to linear matrix inequality (LMI) problems via suitable Lyapunov functions and T–S fuzzy techniques. Finally, numerical examples with simulations are given to demonstrate the validity of the proposed approach.
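A hedged sketch of the resulting LMI test: for subsystem matrices A_i from the T–S approximation, search for a single P > 0 with A_i^T P + P A_i < 0. The matrices below are illustrative, and cvxpy stands in for whichever LMI solver is used:

```python
import numpy as np
import cvxpy as cp

A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])   # local linear models from the
A2 = np.array([[-1.5, 0.5], [0.3, -2.0]])   # T-S fuzzy approximation (illustrative)

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(2)]
cons += [A.T @ P + P @ A << -eps * np.eye(2) for A in (A1, A2)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status, P.value)   # 'optimal' => a common Lyapunov matrix exists
```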

Journal ArticleDOI
TL;DR: The goal of domain independent object recognition was achieved for the detection of relatively small regular objects in larger images with relatively uncluttered backgrounds, however, detection performance on irregular objects in complex, highly cluttered backgrounds such as the retina pictures has not been achieved to an acceptable level.
Abstract: This paper describes a domain independent approach to multiple class rotation invariant 2D object detection problems. The approach avoids preprocessing, segmentation and specific feature extraction. Instead, raw image pixel values are used as inputs to the learning systems. Five object detection methods have been developed and tested: the basic method and four variations which are expected to improve its accuracy. In the basic method, cutouts of the objects of interest are used to train multilayer feed-forward networks using backpropagation. The trained network is then used as a template to sweep the full image and find the objects of interest. The variations are: (1) use of a centred weight initialisation method in network training; (2) use of a genetic algorithm to train the network; (3) use of a genetic algorithm, with fitness based on detection rate and false alarm rate, to refine the weights found in the basic approach; and (4) use of the same genetic algorithm to refine the weights found by method 2. These methods have been tested on three detection problems of increasing difficulty: an easy database of circles and squares, a medium difficulty database of coins, and a very difficult database of retinal pathologies. For detecting the objects in all classes of interest in the easy and medium difficulty problems, a 100% detection rate with no false alarms was achieved. However, the results on the retinal pathologies were unsatisfactory. The centred weight initialisation algorithm improved detection performance over the basic approach on all three databases. In addition, refinement of weights with a genetic algorithm significantly improved detection performance on the three databases. The goal of domain independent object recognition was achieved for the detection of relatively small regular objects in larger images with relatively uncluttered backgrounds. Detection performance on irregular objects in complex, highly cluttered backgrounds, such as the retina pictures, however, has not been achieved to an acceptable level.
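The basic method's sweep step, sketched with a stand-in for the trained network; window size, stride and threshold are illustrative:

```python
import numpy as np

def sweep(image, net, win=16, stride=4, thresh=0.9):
    """Apply a trained classifier as a template at every window position."""
    hits = []
    H, W = image.shape
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            cutout = image[r:r + win, c:c + win].ravel()
            if net(cutout) > thresh:
                hits.append((r, c))
    return hits

rng = np.random.default_rng(0)
img = rng.random((64, 64))
dummy_net = lambda x: float(x.mean())        # placeholder for the trained MLP
print(len(sweep(img, dummy_net, thresh=0.55)))
```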

Journal ArticleDOI
TL;DR: It is demonstrated that all these approaches lead to a population of agents that very successfully play checkers against a backtrack algorithm with search depth 3.
Abstract: The emergence of game strategy in multiagent systems is studied. Symbolic and subsymbolic (neural network) approaches are compared. The symbolic approach is represented by a backtrack algorithm with a specified search depth, whereas the subsymbolic approach is represented by feedforward neural networks adapted by the reinforcement temporal-difference technique TD(λ). As a test game, we use simplified checkers. The problem is studied in the framework of a multiagent system, where each agent is endowed with a neural network used for the classification of checkers positions. Three different strategies are used. The first strategy corresponds to a single agent that repeatedly plays games against a MinMax version of a backtrack search method. The second strategy corresponds to single agents repeatedly playing a megatournament, where each agent plays two different games with every other agent: one game with white pieces and the other with black pieces. After finishing each game, both agents modify their neural networks by reinforcement learning. The third strategy is an evolutionary modification of the second one. When a megatournament is finished, each agent is evaluated by a fitness which reflects its success in the given megatournament (more successful agents have greater fitness). It is demonstrated that all these approaches lead to a population of agents that very successfully play checkers against a backtrack algorithm with search depth 3.
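The TD(λ) update the agents apply after each move, in skeleton form with a linear value function and toy position features (both assumptions for illustration):

```python
import numpy as np

def td_lambda_update(w, trace, phi, r, phi_next, alpha=0.05, gamma=1.0, lam=0.7):
    """One temporal-difference step with eligibility traces (linear value function)."""
    v, v_next = w @ phi, w @ phi_next
    delta = r + gamma * v_next - v          # TD error
    trace = gamma * lam * trace + phi       # accumulate eligibility
    w += alpha * delta * trace
    return w, trace

n = 8                                       # size of the (toy) position encoding
w, trace = np.zeros(n), np.zeros(n)
rng = np.random.default_rng(0)
for _ in range(100):                        # moves of a pretend game
    phi, phi_next = rng.random(n), rng.random(n)
    w, trace = td_lambda_update(w, trace, phi, rng.choice([0.0, 0.0, 1.0]), phi_next)
print(w)
```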

Journal ArticleDOI
TL;DR: A keyword specification method for images, which can be used to retrieve an image by a keyword, is proposed; in order to specify a keyword for a sub-region of an image, images are segmented into regions.
Abstract: We propose a keyword specification method for images, which can be used to retrieve an image by a keyword. In order to specify a keyword for a sub-region of an image, images are segmented into regions. Here, we consider ten keywords to specify the regions. The image segmentation method consists of the maximum-distance algorithm, labeling, and merging of small regions. We provide training regions for each keyword. Important features of each keyword are selected using Factor Analysis (FA). The features are compressed into a two-dimensional space using a Sandglass-type Neural Network (SNN). We train the SNN using the training dataset. Keyword extraction is then evaluated on 60 samples of the testing dataset.

Journal ArticleDOI
TL;DR: A method for extracting Zadeh–Mamdani fuzzy rules from a minimalist constructive neural network model is described, which contains no embedded fuzzy logic elements.
Abstract: A method for extracting Zadeh–Mamdani fuzzy rules from a minimalist constructive neural network model is described. The network contains no embedded fuzzy logic elements. The rule extraction algorithm needs no modification of the neural network architecture. No modification of the network learning algorithm is required, nor is it necessary to retain any training examples. The algorithm is illustrated on two well known benchmark data sets and compared with a relevant existing rule extraction algorithm.

Journal ArticleDOI
TL;DR: A new computational method in searching for the conserved WFS in genomes is presented, defined as the difference of free energies between the computed optimal structure (OS) and its corresponding optimal restrained structure where all the previous base pairings in the OS are forbidden.
Abstract: Recent advances in RNA studies show that well-ordered, structured RNAs perform a broad range of functions in various biological mechanisms. Included among these functions is the regulation of gene expression at multiple levels by diversified ribozymes and various RNA regulatory elements. The discovered microRNAs (miRNAs), with their distinct stem-loops, are a new class of RNA regulatory elements. The prediction of those well-ordered folding sequences (WFS) associated with RNA regulatory elements in genomic sequences is very helpful for our understanding of RNA-based gene regulation. We present here a new computational method for searching for the conserved WFS in genomes. In the method, the WFS is assessed by a quantitative measure Ediff, defined as the difference of free energies between the computed optimal structure (OS) and its corresponding optimal restrained structure in which all the previous base pairings in the OS are forbidden. From those WFS with high Ediff scores, the conserved WFS is determined b...

Journal ArticleDOI
TL;DR: A system for mining surface motifs: SUMOMO which consists of two phases: surface motif extraction and surface motif filtering was developed and Motifs corresponding to all 4 known functional sites were recognised.
Abstract: Protein surface motifs, which can be defined as commonly appearing patterns of shape and physical properties in protein molecular surfaces, can be considered "possible active sites". We have developed a system for mining surface motifs, SUMOMO, which consists of two phases: surface motif extraction and surface motif filtering. In the extraction phase, a given set of protein molecular surface data is divided into small surfaces called unit surfaces. After extracting several common unit surfaces as candidate motifs, they are repetitively merged into surface motifs. However, a large number of surface motifs is extracted in this phase, making it difficult to distinguish whether the extracted motifs are significant enough to be considered active sites. Since active sites from proteins with a particular function have similar shape and physical properties, proteins can be classified based on similarity among local surfaces. Thus, in the filtering phase, local surfaces extracted from proteins of the same group are considered significant motifs, and the rest are filtered out. The proposed method was applied to discover surface motifs from 15 proteins belonging to four function groups. Motifs corresponding to all four known functional sites were recognised.

Journal ArticleDOI
TL;DR: A digital watermarking scheme based on vector quantisation (VQ) for a gray watermark is proposed, which embeds the encoded indices into the cover image in the VQ domain; a genetic index assignment (GIA) procedure is proposed to improve the performance of the watermarking scheme.
Abstract: A digital watermarking scheme based on vector quantisation (VQ) for a gray watermark is proposed. It begins with the procedure of encoding the gray watermark, and then embeds the encoded indices into the cover image in the VQ domain. To improve the performance of the watermarking scheme, a genetic index assignment (GIA) procedure, which modifies the signal of the watermark to suit the signal of the cover image, is proposed. The proposed gray watermark embedding scheme is easy to implement, requires no original cover image to be present during extraction, expands the size of the usable watermark, and provides better watermarked results. Experimental results show its novelty and effectiveness.
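The VQ front end of such a scheme, sketched with numpy: the gray watermark is encoded as the indices of the nearest codewords. Codebook size and block shape are illustrative, and the genetic index assignment itself is not shown:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Return, for each block, the index of the closest codeword."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))            # 256 codewords for 4x4 blocks
watermark_blocks = rng.random((32, 16))     # gray watermark cut into blocks
indices = vq_encode(watermark_blocks, codebook)
print(indices[:8])                          # these indices get embedded in the cover image
```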

Journal ArticleDOI
TL;DR: This paper presents a hybrid evolutionary algorithm for the constrained multiple-destination routing problem, based on a population-based incremental learning algorithm and a constrained distance network heuristic (or CKMB) algorithm.
Abstract: This paper presents a hybrid evolutionary algorithm for the constrained multiple-destination routing problem. The problem can be formulated as minimising tree cost under several constraints or QoS metrics. Computing such a constrained multicast tree has been proven to be NP-complete. The proposed hybrid algorithm is based on a population-based incremental learning algorithm and a constrained distance network heuristic (or CKMB) algorithm. In the proposed algorithm, CKMB is utilised as a decoding scheme. Experimental results show that, in most cases, the proposed algorithm yields better solutions than other heuristic algorithms proposed in the literature, including the best-known one, BSMA.

Journal ArticleDOI
TL;DR: The simulation results indicate that the use of the CCGA is proven to be highly efficient in terms of the minimal energy cost obtained in comparison to the results given by the searches using a standard genetic algorithm and a dynamic programming technique.
Abstract: This paper presents the use of a co-operative co-evolutionary genetic algorithm (CCGA) for solving optimal control problems in a hysteresis system. The hysteresis system is a hybrid control system which can be described by a continuous multivalued state-space representation that can switch between two possible discrete modes. The problems investigated cover the optimal control of the hysteresis system with fixed and free final state/time requirements. With the use of the Pontryagin maximum principle, the optimal control problems can be formulated as optimisation problems. In this case, the decision variables consist of the values of the control signal at switches between discrete modes, while the objective value is calculated from an energy cost function. The simulation results indicate that the use of the CCGA is highly efficient in terms of the minimal energy cost obtained, in comparison to the results given by searches using a standard genetic algorithm and a dynamic programming technique. This helps to confirm that the CCGA can handle complex optimal control problems by exploiting a co-evolutionary effect in an efficient manner.
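The co-operative co-evolutionary mechanism in skeleton form: the decision vector is split between two subpopulations, and each individual is evaluated together with the best collaborator from the other subpopulation. The sphere objective and the crude mutation-only GA are placeholders for the paper's energy cost and genetic operators:

```python
import numpy as np

rng = np.random.default_rng(0)
cost = lambda v: np.sum(v ** 2)             # placeholder energy cost

pops = [rng.normal(size=(20, 2)) for _ in range(2)]   # two species, 2 genes each
best = [p[0].copy() for p in pops]

for gen in range(100):
    for s in range(2):
        other = best[1 - s]
        full = lambda ind: np.concatenate([ind, other] if s == 0 else [other, ind])
        fit = np.array([cost(full(ind)) for ind in pops[s]])
        elite = pops[s][fit.argmin()]
        best[s] = elite.copy()
        # resample the species around its elite (a crude GA stand-in)
        pops[s] = elite + rng.normal(scale=0.1, size=pops[s].shape)
print(cost(np.concatenate(best)))           # shrinks toward zero
```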

Journal ArticleDOI
TL;DR: This paper presents a parallel genetic algorithm to solve the site layout problem with unequal-size and constrained facilities, characterised by affinity weights used to model transportation costs between facilities, and by geometric constraints between relative positions of facilities on site.
Abstract: Parallel genetic algorithm techniques have been used in a variety of computer engineering and science areas. This paper presents a parallel genetic algorithm to solve the site layout problem with unequal-size and constrained facilities. The problem involves coordinating the use of limited space to accommodate temporary facilities subject to geometric constraints. The problem is characterised by affinity weights used to model transportation costs between facilities, and by geometric constraints between the relative positions of facilities on site. The algorithm is parallelised based on a message-passing SPMD architecture using parallel search and chromosome migration. The algorithm is tested on a variety of layout problems to illustrate its performance, specifically in the case of: (1) loosely versus tightly constrained layouts with equal levels of interaction between facilities, (2) loosely versus tightly packed layouts with variable levels of interaction between facilities, and (3) loosely versus tightly constrained layouts. Favorable results are reported.

Journal ArticleDOI
TL;DR: The approach is to partition the observed signals into several parts and to extract the partitioned observations with a simple activation function performing only the "shift-and-add" micro-operation, which provides comparable efficiency with other approaches, but lower space and time complexity.
Abstract: Although several highly accurate blind source separation algorithms have already been proposed in the literature, these algorithms must store and process the whole data set, which may be tremendous in some situations. This makes blind source separation infeasible and not realisable at the VLSI level, due to a large memory requirement and costly computation. This paper concerns algorithms for solving the problems of tremendous data sets and high computational complexity, so that the algorithms can run on-line and be implementable at the VLSI level with acceptable accuracy. Our approach is to partition the observed signals into several parts and to extract the partitioned observations with a simple activation function performing only the "shift-and-add" micro-operation. No division, multiplication, or exponential operations are needed. Moreover, a method for obtaining an optimal initial de-mixing weight matrix to speed up the separation is also presented. The proposed algorithm is tested on some benchmarks available online. The experimental results show that our solution provides efficiency comparable with other approaches, but lower space and time complexity.
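The hardware motivation can be made concrete: if every coefficient is rounded to a signed power of two, the products in y = Wx reduce to binary shifts and additions. This shows only the quantisation idea, not the authors' separation algorithm:

```python
import numpy as np

def to_power_of_two(w):
    """Round each entry to sign(w) * 2**k, the nearest power of two."""
    k = np.round(np.log2(np.abs(w) + 1e-12))
    return np.sign(w) * 2.0 ** k

W = np.array([[0.9, -0.3], [0.24, 1.7]])    # illustrative de-mixing coefficients
Wq = to_power_of_two(W)
print(Wq)            # [[ 1.  -0.25], [ 0.25  2. ]] -> multiplies become shifts
x = np.array([3.0, -5.0])
print(Wq @ x)        # computable with shift-and-add micro-operations only
```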

Journal ArticleDOI
TL;DR: An evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them is outlined and the issue of whether and how to characterise emotions in such a way that one might say that a robot has emotions even if they are not empathically linked to human emotions is addressed.
Abstract: The article outlines an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. It then addresses the issue of whether and how to characterise emotions in such a way that one might say that a robot has emotions even if they are not empathically linked to human emotions.

Journal ArticleDOI
TL;DR: An approach for real-time blind source separation (BSS), in which the observations are linear convolutive mixtures of statistically independent acoustic sources, is proposed and an exponentially weighted recursive BSS algorithm is realised, which can converge to a much lower cost value than that of the gradient algorithm.
Abstract: We propose an approach for real-time blind source separation (BSS), in which the observations are linear convolutive mixtures of statistically independent acoustic sources. A recursive least square (RLS)-like strategy is devised for real-time BSS processing. A normal equation is further introduced as an expression between the separation matrix and the correlation matrix of observations. We recursively estimate the correlation matrix and explicitly, rather than stochastically, solve the normal equation to obtain the separation matrix. As an example of application, the approach has been applied to a BSS problem where the separation criterion is based on the second-order statistics and the non-stationarity of signals in the frequency domain. In this way, we realise a novel BSS algorithm, called exponentially weighted recursive BSS algorithm. The simulation and experimental results showed an improved separation and a superior convergence rate of the proposed algorithm over that of the gradient algorithm. Moreover, this algorithm can converge to a much lower cost value than that of the gradient algorithm.
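The recursive estimate at the centre of an RLS-like strategy, sketched: the observation correlation matrix is updated with an exponential forgetting factor, and the normal equation is then solved explicitly rather than by stochastic gradient. The right-hand side b is a placeholder standing in for the separation criterion's term:

```python
import numpy as np

lam = 0.98                                  # forgetting factor
n = 3
R = np.eye(n)                               # running correlation estimate
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.normal(size=n)                  # one observation vector
    R = lam * R + (1 - lam) * np.outer(x, x)

# explicit (not stochastic-gradient) solve of a normal equation R w = b
b = np.ones(n)
w = np.linalg.solve(R, b)
print(w)
```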

Journal ArticleDOI
TL;DR: The discriminant function outperforms SVMs that use a kernel based on the intensities of all pixels (according to independently published results), in face detection experiments over the 24,045 test images in the MIT-CBCL database.
Abstract: A retina has a space-variant sampling mechanism and an orientation-sensitive mechanism. The space-variant sampling mechanism of the retina is called retinotopic sampling (RS). With these mechanisms of the retina, object detection is formulated as finding an appropriate coordinate transformation from a coordinate system on an input image to a coordinate system on the retina. However, when the object size is inferred by this mechanism, the result tends to gravitate towards zero. To cancel this gravity, the space-variant sampling mechanism is replaced with a uniform sampling mechanism, while the concept of RS is equivalently retained through space-variant weights. This object-detection mechanism is modeled as a non-parametric method. Using the model based on RS, we formulate a kernel function as an analytical function of information about an object, namely the position and size of the object in an image. Object detection is then realised as a gradient descent method applied to a discriminant function trained by a Support Vector Machine (SVM) using this kernel function. This detection mechanism achieves faster detection than exploring a visual scene in a raster-like fashion. The discriminant function outperforms SVMs that use a kernel based on the intensities of all pixels (according to independently published results), in face detection experiments over the 24,045 test images in the MIT-CBCL database.

Journal ArticleDOI
Gancho Vachkov
TL;DR: The newly proposed integrated unsupervised and Error-Driven learning algorithm is the main part of the incremental identification procedure and is a modified version of the Neural-Gas learning algorithm, which is able to drive the neurons to the "more interesting" areas in the input space, where bigger evaluation errors exist, instead of just locating them in the areas of higher data density.
Abstract: In this paper, a special incremental identification procedure is proposed for learning a Neural Singleton (NS) model, a simplification of the RBF neural network (RBFNN) model. At each epoch of this incremental procedure, a special integrated learning algorithm for the neuron centers and then a supervised learning step for the singletons are performed. As a result, the approximation error of the NS model gradually decreases as neurons are added at each learning epoch. The newly proposed integrated unsupervised and error-driven learning algorithm is the main part of the incremental identification procedure. It is a modified version of the Neural-Gas learning algorithm, which uses the normalised evaluation error as feedback information. Thus the integrated learning algorithm is able to drive the neurons to the "more interesting" areas in the input space, where larger evaluation errors exist, instead of just locating them in the areas of higher data density. The final result is an incrementally growing NS model that gradually improves its approximation accuracy at each learning epoch. A detailed test example is given in the paper to evaluate the performance and show the important features of the whole incremental identification procedure, including the integrated learning algorithm. Comparison results with the conventional (one-epoch, non-incremental) model are also given.
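A sketch of the neural-gas style update being modified here: units are ranked by distance to the sample and pulled toward it with a rank-decaying step. The error-driven feedback is mimicked by scaling the step with a normalised per-sample error, which is our reading of the idea rather than the paper's exact algorithm:

```python
import numpy as np

def ng_update(centers, x, err, eps=0.1, lam=1.5):
    d = np.linalg.norm(centers - x, axis=1)
    rank = np.argsort(np.argsort(d))            # 0 for the closest unit
    h = np.exp(-rank / lam)[:, None]
    centers += eps * err * h * (x - centers)    # larger error -> stronger pull
    return centers

rng = np.random.default_rng(0)
centers = rng.random((5, 2))
for _ in range(200):
    x = rng.random(2)
    err = 0.5 + 0.5 * x[0]                      # stand-in for the normalised model error at x
    centers = ng_update(centers, x, err)
print(centers)
```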

Journal ArticleDOI
TL;DR: The proposed method evaluates the effect of interval selection through training on a dataset, eliminating those intervals of the variable values that contribute negatively when processed by the network; the best network with selection can produce better performance accuracy and a smaller network size.
Abstract: Selecting parameters can be a powerful mechanism in constructing a new evolving connectionist network. However, if a parameter contains partial information, such that only some of its values are relevant and others are not, then a selection of the subset of relevant values is more appropriate. Considering the possible values of a parameter of a processing connectionist network as the outcomes of a variable, this research focuses on selecting interval values of the variable. It also considers the partitioning schemes used in generating the intervals from the outcomes of a variable. The goal of this work is to explore variable value selection and its effect in an evolving connectionist network. Using input variables in a backpropagation network, the proposed method evaluates their effect through training on a dataset, and eliminates those intervals of the variable values that contribute negatively when processed by the network. When a value falls into an interval that has been selected for elimination, the network behaves as if the corresponding variable were not processed, and vice versa. Two approaches for interval partitioning are considered, based on an equal-probability (or maximum entropy) and an equal-width partitioning scheme. Comparing the best performing network with selection to the one without selection, the experimental results show that the best network with selection can produce better performance accuracy and a smaller network size.
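The two partitioning schemes compared in the paper, in numpy form: equal-width bins split the variable's range evenly, while equal-probability (maximum-entropy) bins place roughly the same number of samples in each interval:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.exponential(size=1000)          # a skewed variable

k = 5
equal_width = np.linspace(values.min(), values.max(), k + 1)
equal_prob = np.quantile(values, np.linspace(0, 1, k + 1))

print("equal-width edges:", equal_width.round(2))
print("equal-prob  edges:", equal_prob.round(2))
print("counts per equal-width bin:",
      np.histogram(values, bins=equal_width)[0])
```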