
Showing papers in "Neural Processing Letters in 2008"


Journal ArticleDOI
TL;DR: This work introduces two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data and derives the optimal basis and the minimum error of approximation in this framework.
Abstract: We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtracting the mean of the data becomes part of the solution of the optimization problem rather than a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent one.

150 citations
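The minimum-MSE view of PCA can be made concrete with a small numpy sketch (an illustration of the standard result, not the paper's derivation): among all rank-k affine approximations, the optimal offset is the sample mean and the optimal basis is spanned by the leading right singular vectors of the centered data; the data and constants here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5)) + rng.normal(size=5)

# Optimal offset for a rank-k affine approximation is the sample mean.
mu = X.mean(axis=0)
Xc = X - mu

# Leading k right singular vectors give the optimal basis.
k = 2
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
B = Vt[:k]                       # (k, d) orthonormal basis
X_hat = mu + (Xc @ B.T) @ B      # rank-k reconstruction

mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
# Minimum achievable MSE equals the sum of discarded squared singular values / n.
mse_theory = np.sum(s[k:] ** 2) / X.shape[0]
```

The equality of `mse` and `mse_theory` is the "minimum error of approximation" the abstract refers to, up to the paper's particular derivation.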


Journal ArticleDOI
TL;DR: The Time-delay Added Evolutionary Forecasting approach is a new method for time series prediction that performs an evolutionary search for the minimum number of dimensions necessary to represent the underlying information that generates the time series.
Abstract: The Time-delay Added Evolutionary Forecasting (TAEF) approach is a new method for time series prediction that performs an evolutionary search for the minimum number of dimensions necessary to represent the underlying information that generates the time series. The proposed methodology is inspired by Takens' theorem and consists of an intelligent hybrid model composed of an artificial neural network combined with a modified genetic algorithm. Initially, the TAEF method finds the best-fitted model to forecast the series and then performs a behavioral statistical test in order to adjust time-phase distortions that may appear in the representation of some series. An experimental investigation conducted with relevant time series shows the robustness of the method through a comparison, according to several performance measures, with previous results found in the literature and with those obtained by more traditional methods.

106 citations
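The dimension search in TAEF rests on Takens-style delay embedding. Below is a generic sketch of such an embedding (the embedding dimension and lag are illustrative choices, not values from the paper):

```python
import numpy as np

def delay_embed(series, dim, lag=1):
    """Map a 1-D series to rows [x_t, x_{t+lag}, ..., x_{t+(dim-1)lag}]."""
    series = np.asarray(series)
    n = len(series) - (dim - 1) * lag
    return np.stack([series[i * lag : i * lag + n] for i in range(dim)], axis=1)

x = np.sin(0.3 * np.arange(100))
E = delay_embed(x, dim=3, lag=2)   # shape (96, 3)
```

An evolutionary search like TAEF's would score candidate `dim` values by how well a predictor trained on `E` forecasts the next sample.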


Journal ArticleDOI
TL;DR: Empirical and simulation studies suggest that pairing either LDA assuming a common diagonal covariance matrix (LDA-Λ) or the naïve Bayes classifier with linear logistic regression may not be perfect, and hence it may not be reliable to generalise claims derived from such comparisons to all generative and discriminative classifiers.
Abstract: Comparison of generative and discriminative classifiers is an everlasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naive Bayes classifier and linear logistic regression, Ng and Jordan (NIPS, pp. 841–848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892–898, 1975) and O'Neill (J Am Stat Assoc 75(369):154–160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing either LDA assuming a common diagonal covariance matrix (LDA-Λ) or the naive Bayes classifier with linear logistic regression may not be perfect, and hence it may not be reliable to generalise any claim derived from the comparison between LDA-Λ or the naive Bayes classifier and linear logistic regression to all generative and discriminative classifiers.

91 citations
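The generative-vs-discriminative comparison at the heart of this debate can be reproduced in miniature. The sketch below (a toy setup, not Ng and Jordan's experiment) fits a Gaussian naive Bayes classifier and a gradient-descent logistic regression on the same synthetic two-class data; all constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + 2.0 * y[:, None]   # class-conditional Gaussians
    return X, y

def fit_nb(X, y):
    # Generative: per-class mean, per-feature variance, and class prior.
    params = []
    for c in (0, 1):
        Xc = X[y == c]
        params.append((Xc.mean(0), Xc.var(0) + 1e-6, len(Xc) / len(X)))
    return params

def predict_nb(params, X):
    scores = []
    for mu, var, prior in params:
        ll = -0.5 * np.sum((X - mu) ** 2 / var + np.log(var), axis=1)
        scores.append(ll + np.log(prior))
    return np.argmax(np.stack(scores, axis=1), axis=1)

def fit_logreg(X, y, lr=0.1, steps=500):
    # Discriminative: plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

Xtr, ytr = make_data(200)
Xte, yte = make_data(2000)
nb_acc = np.mean(predict_nb(fit_nb(Xtr, ytr), Xte) == yte)
w, b = fit_logreg(Xtr, ytr)
lr_acc = np.mean(((Xte @ w + b) > 0).astype(int) == yte)
```

Sweeping the training-set size passed to `make_data` is the kind of experiment from which the "two regimes" claim was originally drawn.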


Journal ArticleDOI
Xiaodong Gu1
TL;DR: Unit-linking PCNN (Pulse Coupled Neural Network), a simplified PCNN model consisting of spiking neurons, is used to code a 2-dimensional image into a 1-dimensional time sequence called the global Unit-linking PCNN image icon or time signature, which retains features of the original image and is invariant to translation, rotation, and scale.
Abstract: In this paper, we use Unit-linking PCNN (Pulse Coupled Neural Network), a simplified PCNN model consisting of spiking neurons, to code a 2-dimensional image into a 1-dimensional time sequence called the global Unit-linking PCNN image icon or time signature, which retains features of the original image and is invariant to translation, rotation, and scale. Dividing an image into multiple parts yields local Unit-linking PCNN image icons corresponding to the image's local regions, which can reflect local changes in the image. The global and local Unit-linking PCNN image icons are applied to navigation, object detection, and image authentication. In navigation, the global Unit-linking PCNN image icon shows qualified performance, especially in non-stationary-video navigation. Object detection using the global Unit-linking PCNN image icon is independent of translation, rotation, and scale, and object segmentation is avoided. In image authentication, the local Unit-linking PCNN image icon can correctly authenticate some tampered images that local histogram or local mean intensity methods fail to authenticate, and can locate the tampered regions with some accuracy.

69 citations
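A heavily simplified sketch of the time-signature idea: at each step a neuron fires when its stimulus, boosted if any 4-neighbour fired on the previous step (the "unit-linking" term), exceeds a decaying threshold, and the signature is the firing count per step. The dynamics and constants below are illustrative assumptions, not the paper's model; periodic borders via `np.roll` make translation invariance of the signature exact in this toy version.

```python
import numpy as np

def unit_linking_signature(img, steps=20, beta=0.3, theta0=2.0, decay=0.8):
    """Toy unit-linking PCNN: the signature is the number of firing
    neurons per step, a 1-D time series summarizing the 2-D image."""
    F = img.astype(float)
    theta = np.full(img.shape, theta0)
    fired_prev = np.zeros(img.shape, bool)
    sig = []
    for _ in range(steps):
        # Unit-linking: linking input is 1 if any 4-neighbour fired (periodic borders).
        L = (np.roll(fired_prev, 1, 0) | np.roll(fired_prev, -1, 0) |
             np.roll(fired_prev, 1, 1) | np.roll(fired_prev, -1, 1)).astype(float)
        U = F * (1.0 + beta * L)
        fired = U > theta
        theta = decay * theta + 5.0 * fired     # refractory rise after firing
        sig.append(int(fired.sum()))
        fired_prev = fired
    return np.array(sig)

rng = np.random.default_rng(2)
img = rng.random((16, 16))
sig = unit_linking_signature(img)
# Under periodic borders the signature is exactly translation invariant:
sig_shift = unit_linking_signature(np.roll(img, (3, 5), axis=(0, 1)))
```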


Journal ArticleDOI
TL;DR: This paper proposes a new nonlinear system identification scheme using differential evolution (DE), a neural network and the Levenberg–Marquardt algorithm (LM), and it has been confirmed that the proposed DE- and LM-trained NN approach to nonlinear system identification yields better identification results in terms of convergence time and identification error.
Abstract: This paper proposes a new nonlinear system identification scheme using differential evolution (DE), a neural network and the Levenberg–Marquardt algorithm (LM). Here, DE and LM in a combined framework are used to train a neural network to achieve better convergence of neural network weight optimization. A number of examples, including a practical case study, have been considered for implementation of different system identification methods, namely NN only, DE+NN and DE+LM+NN. After a series of simulation studies of these methods on different nonlinear systems, it was confirmed that the proposed DE- and LM-trained NN approach to nonlinear system identification yields better identification results in terms of convergence time and identification error.

65 citations
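The global-search half of the DE+LM+NN scheme is ordinary differential evolution. Below is a minimal DE/rand/1/bin sketch, here minimizing a sphere function rather than network weights (population size, F and CR are illustrative defaults, not the paper's settings):

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=100, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    P = lo + rng.random((pop, d)) * (hi - lo)
    fit = np.array([f(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            # Mutation: combine three distinct members other than i.
            a, b, c = P[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutant gene through.
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, P[i])
            ft = f(trial)
            if ft <= fit[i]:                     # greedy selection
                P[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return P[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
```

In the hybrid scheme, a DE result like `x_best` would seed a gradient-based refinement stage (LM) over the network weights.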


Journal ArticleDOI
TL;DR: This paper deals with the problem of passivity analysis for neural networks with time-varying delay, which is subject to norm-bounded time-varying parameter uncertainties, and proposes a delay-dependent passivity condition using the free-weighting matrix approach.
Abstract: This paper deals with the problem of passivity analysis for neural networks with time-varying delay, which is subject to norm-bounded time-varying parameter uncertainties. The activation functions are assumed to be bounded and globally Lipschitz continuous. A delay-dependent passivity condition is proposed by using the free-weighting matrix approach. These passivity conditions are obtained in terms of linear matrix inequalities, which can be checked easily by using standard algorithms. Two illustrative examples are provided to demonstrate the effectiveness of the proposed criteria.

50 citations


Journal ArticleDOI
TL;DR: A new dynamical model where synapses of the associative memory could be adjusted even after the training phase as a response to an input stimulus is described, providing some propositions that guarantee perfect and robust recall of the fundamental set of associations.
Abstract: The brain is not a huge fixed neural network, but a dynamic, changing neural network that continuously adapts to meet the demands of communication and computational needs. In classical neural network approaches, particularly associative memory models, synapses are only adjusted during the training phase. After this phase, synapses are no longer adjusted. In this paper we describe a new dynamical model where synapses of the associative memory can be adjusted even after the training phase as a response to an input stimulus. We provide some propositions that guarantee perfect and robust recall of the fundamental set of associations. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments that show the accuracy of the proposed model.

41 citations


Journal ArticleDOI
TL;DR: This paper focuses on the applicability of the features inspired by the visual ventral stream for handwritten character recognition, and an analysis is conducted to evaluate the robustness of this approach to orientation, scale and translation distortions.
Abstract: This paper focuses on the applicability of features inspired by the visual ventral stream for handwritten character recognition. A set of scale- and translation-invariant C2 features is first extracted from all images in the dataset. Three standard classifiers (kNN, ANN and SVM) are then trained over a training set and compared over a separate test set. In order to achieve a higher recognition rate, a two-stage classifier was designed with different preprocessing in the second stage. Experiments performed to validate the method on the well-known MNIST database and on standard Farsi digits and characters exhibit high recognition rates and compete with some of the best existing approaches. Moreover, an analysis is conducted to evaluate the robustness of this approach to orientation, scale and translation distortions.

40 citations


Journal ArticleDOI
TL;DR: A constructive proof that a real, piecewise continuous function can be almost uniformly approximated by single hidden-layer feedforward neural networks (SLFNNs) is given.
Abstract: In this paper, we give a constructive proof that a real, piecewise continuous function can be almost uniformly approximated by single hidden-layer feedforward neural networks (SLFNNs). The construction procedure avoids the Gibbs phenomenon. Computer experiments show that the resulting approximant is much more accurate than SLFNNs trained by gradient descent.

37 citations
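Why a sigmoid-based construction can avoid the Gibbs phenomenon is easy to see numerically: a steep sigmoid is monotone and therefore cannot overshoot a jump, whereas a truncated Fourier series of the same step overshoots by roughly 9% of the jump. A small illustration (not the paper's construction):

```python
import numpy as np

x = np.linspace(-1, 1, 2001)
step = (x > 0).astype(float)

# Steep sigmoid: monotone, so it cannot overshoot the jump.
sig = 1.0 / (1.0 + np.exp(-200 * x))

# Truncated Fourier series of the same step exhibits the Gibbs overshoot.
fourier = 0.5 + sum(2 / (np.pi * k) * np.sin(k * np.pi * x) for k in range(1, 40, 2))

sig_overshoot = sig.max() - 1.0          # <= 0: no Gibbs phenomenon
fourier_overshoot = fourier.max() - 1.0  # roughly +0.09
```

A constructive SLFNN approximant built from such steep sigmoid units inherits this overshoot-free behaviour at each jump of a piecewise continuous target.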


Journal ArticleDOI
TL;DR: To address the estimation problem, a method based on nearest neighbor graphs is suggested and its convergence properties under the assumption of a Hölder continuous regression function are discussed.
Abstract: In this paper, the problem of residual variance estimation is examined. The problem is analyzed in a general setting which covers non-additive heteroscedastic noise under non-iid sampling. To address the estimation problem, we suggest a method based on nearest neighbor graphs and we discuss its convergence properties under the assumption of a Hölder continuous regression function. The universality of the estimator makes it an ideal tool in problems with little prior knowledge available.

34 citations
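The flavour of a nearest-neighbour residual variance estimator can be sketched as follows (a first-neighbour version in the spirit of the abstract; the paper's graph-based estimator may differ). For y = f(x) + noise with f smooth, half the mean squared difference between each output and its nearest neighbour's output estimates the noise variance:

```python
import numpy as np

def nn_residual_variance(X, y):
    """Estimate noise variance as 0.5 * mean((y_i - y_NN(i))^2),
    where NN(i) is i's nearest neighbour in input space."""
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(D, np.inf)
    nn = np.argmin(D, axis=1)
    return 0.5 * np.mean((y - y[nn]) ** 2)

rng = np.random.default_rng(3)
n = 2000
X = rng.random((n, 1))
noise_var = 0.25
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=np.sqrt(noise_var), size=n)
est = nn_residual_variance(X, y)   # approaches 0.25 as n grows
```

The factor 0.5 compensates for the noise in both y_i and its neighbour; the smoothness bias vanishes as neighbours get closer with growing n.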


Journal ArticleDOI
TL;DR: This paper presents a new direct LDA method (called gradient LDA) for computing the orientation, especially for the small sample size problem, which avoids discarding the null spaces of the within-class and between-class scatter matrices, which may contain discriminative information useful for classification.
Abstract: The purpose of conventional linear discriminant analysis (LDA) is to find an orientation which projects high-dimensional feature vectors of different classes to a more manageable low-dimensional space in the most discriminative way for classification. The LDA technique utilizes an eigenvalue decomposition (EVD) method to find such an orientation. This computation is usually adversely affected by the small sample size problem. In this paper we present a new direct LDA method (called gradient LDA) for computing the orientation, especially for the small sample size problem. A gradient descent based method is used for this purpose. It also avoids discarding the null spaces of the within-class and between-class scatter matrices, which may contain discriminative information useful for classification.
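The idea of replacing the EVD by gradient search can be sketched for the two-class Fisher criterion (a simplified illustration, not the paper's algorithm): ascend J(w) = (w'Sb w)/(w'Sw w) directly and compare against the closed-form direction Sw^{-1}(m0 - m1).

```python
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.normal(size=(20, 10)) + 1.5    # few samples relative to dimension
X1 = rng.normal(size=(20, 10)) - 1.5

m0, m1 = X0.mean(0), X1.mean(0)
Sb = np.outer(m0 - m1, m0 - m1)                          # between-class scatter
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)   # within-class scatter

def fisher(w):
    return (w @ Sb @ w) / (w @ Sw @ w)

# Gradient ascent on the Fisher ratio; no eigendecomposition needed.
w = rng.normal(size=10)
for _ in range(3000):
    num, den = w @ Sb @ w, w @ Sw @ w
    grad = 2 * (Sb @ w) / den - 2 * num * (Sw @ w) / den ** 2
    w += 0.02 * grad
    w /= np.linalg.norm(w)     # the ratio is scale-invariant

# Closed-form two-class LDA direction for comparison.
w_star = np.linalg.solve(Sw, m0 - m1)
J_gd, J_star = fisher(w), fisher(w_star / np.linalg.norm(w_star))
```

Here Sw is invertible, so the closed form exists; the gradient route matters precisely when it is not.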

Journal ArticleDOI
TL;DR: A novel locality preserving projections (LPP) algorithm for image recognition, namely, the direct locality preserving projections (DLPP), which directly optimizes the locality preserving criterion on high-dimensional raw image data via simultaneous diagonalization, without any dimensionality reduction preprocessing, is proposed.
Abstract: This paper proposes a novel locality preserving projections (LPP) algorithm for image recognition, namely, the direct locality preserving projections (DLPP), which directly optimizes the locality preserving criterion on high-dimensional raw image data via simultaneous diagonalization, without any dimensionality reduction preprocessing. Our algorithm is a direct and complete implementation of LPP. Experimental results on the PolyU palmprint database and the ORL face database show the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: A new bidirectional hetero-associative memory model for true-color patterns that uses the associative model with dynamical synapses recently introduced in Vazquez and Sossa to guarantee perfect and robust recall of the fundamental set of associations.
Abstract: Classical bidirectional associative memories (BAM) have poor memory storage capacity, are sensitive to noise, are subject to spurious steady states during recall, and can only recall bipolar patterns. In this paper, we introduce a new bidirectional hetero-associative memory model for true-color patterns that uses the associative model with dynamical synapses recently introduced in Vazquez and Sossa (Neural Process Lett, Submitted, 2008). Synapses of the associative memory can be adjusted even after the training phase as a response to an input stimulus. Propositions that guarantee perfect and robust recall of the fundamental set of associations are provided. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments that show the accuracy of the proposed model on a benchmark of true-color patterns.

Journal ArticleDOI
TL;DR: The methodology developed in this article is shown to be simple and effective for the exponential robust stability analysis of neural networks with time-varying delays and distributed delays.
Abstract: In this article, the global exponential robust stability is investigated for the Cohen–Grossberg neural network with both time-varying and distributed delays. The parameter uncertainties are assumed to be time-invariant and bounded, and belong to given compact sets. Applying the idea of vector Lyapunov functions, M-matrix theory and analysis techniques, several sufficient conditions are obtained to ensure the existence, uniqueness, and global exponential robust stability of the equilibrium point for the neural network. The methodology developed in this article is shown to be simple and effective for the exponential robust stability analysis of neural networks with time-varying and distributed delays. The results obtained in this article extend and improve a few recently known results and remove some restrictions on the neural networks. Three examples are given to show the usefulness of the obtained results, which are less restrictive than recently known criteria.

Journal ArticleDOI
TL;DR: A new approach of designing adaptive inverse controller for synchronous generator excitation system containing nonsmooth nonlinearities in actuator device is presented, showing satisfactory control performance and illustrating the potential of the proposed adaptive inverse Controller as useful for practical purpose.
Abstract: In this paper, we present a new approach to designing an adaptive inverse controller for a synchronous generator excitation system containing nonsmooth nonlinearities in the actuator device. The proposed controller considers not only the dynamics of the generator but also the nonlinearities in the actuator. To address this challenge, support vector machines (SVM) are adopted to identify the plant and to construct the inverse controller. The SVM networks, used to compensate nonlinearities in the synchronous generator as well as in the actuator, are adjusted online by an adaptive law via the back propagation (BP) algorithm. To guarantee convergence and fast learning, an adaptive learning rate is derived and a convergence theorem is established. Simulation results are given, showing satisfactory control performance and illustrating the potential of the proposed adaptive inverse controller for practical use.

Journal ArticleDOI
TL;DR: In this contribution, novel approaches are proposed for improving the performance of Probabilistic Neural Networks as well as the recently proposed Evolutionary Probabilistic Neural Networks.
Abstract: In this contribution, novel approaches are proposed for the improvement of the performance of Probabilistic Neural Networks as well as the recently proposed Evolutionary Probabilistic Neural Networks. The Evolutionary Probabilistic Neural Network's matrix of spread parameters is allowed to have different values in each class of neurons, resulting in a more flexible model that fits the data better, and Particle Swarm Optimization is also employed for the estimation of the Probabilistic Neural Network's prior probabilities of each class. Moreover, the bagging technique is used to create an ensemble of Evolutionary Probabilistic Neural Networks in order to further improve the model's performance. The above approaches have been applied to several well-known and widely used benchmark problems with promising results.
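A probabilistic neural network is essentially a Parzen-window classifier, and the per-class spread mentioned above is the knob being evolved. A minimal sketch with hand-picked spreads and priors (PSO and the evolutionary training are omitted; all constants are illustrative):

```python
import numpy as np

def pnn_predict(Xtr, ytr, Xte, spreads, priors):
    """PNN: Gaussian Parzen kernel density per class; classify by prior * density."""
    classes = np.unique(ytr)
    scores = []
    for c, s, p in zip(classes, spreads, priors):
        Xc = Xtr[ytr == c]
        D = np.sum((Xte[:, None, :] - Xc[None, :, :]) ** 2, axis=-1)
        dens = np.mean(np.exp(-D / (2 * s ** 2)), axis=1)
        scores.append(p * dens)
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

rng = np.random.default_rng(5)
X0 = rng.normal(size=(50, 2)); X1 = rng.normal(size=(50, 2)) + 3.0
Xtr = np.vstack([X0, X1]); ytr = np.array([0] * 50 + [1] * 50)
Xte = np.vstack([rng.normal(size=(100, 2)), rng.normal(size=(100, 2)) + 3.0])
yte = np.array([0] * 100 + [1] * 100)

# Note the per-class spreads [0.5, 0.8]: this is the flexibility the paper evolves.
acc = np.mean(pnn_predict(Xtr, ytr, Xte, spreads=[0.5, 0.8], priors=[0.5, 0.5]) == yte)
```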

Journal ArticleDOI
TL;DR: Simulation results indicate that the adaptation procedure is able to guide the Hopfield network towards solutions of the problem starting with random values for weights and constraint weighting coefficients, effectively eliminating the guesswork in defining weight values for a given static optimization problem.
Abstract: This article presents a simulation study for validation of an adaptation methodology for learning weights of a Hopfield neural network configured as a static optimizer. The quadratic Liapunov function associated with the Hopfield network dynamics is leveraged to map the set of constraints associated with a static optimization problem. This approach leads to a set of constraint-specific penalty or weighting coefficients whose values need to be defined. The methodology leverages a learning-based approach to define values of constraint weighting coefficients through adaptation. These values are in turn used to compute values of network weights, effectively eliminating the guesswork in defining weight values for a given static optimization problem, which has been a long-standing challenge in artificial neural networks. The simulation study is performed using the Traveling Salesman problem from the domain of combinatorial optimization. Simulation results indicate that the adaptation procedure is able to guide the Hopfield network towards solutions of the problem starting with random values for weights and constraint weighting coefficients. At the conclusion of the adaptation phase, the Hopfield network acquires weight values which readily position the network to search for local minimum solutions. The demonstrated successful application of the adaptation procedure eliminates the need to guess or predetermine the values for weights of the Hopfield network.

Journal ArticleDOI
TL;DR: This paper considers eight differentiable activation functions and utilizes the Levenberg–Marquardt algorithm for parameter tuning; the studies carried out have a guiding quality based on empirical results on several training data sets.
Abstract: Feedforward neural network structures have been considered extensively in the literature. In a significant volume of research and development studies, the hyperbolic tangent type of neuronal nonlinearity has been utilized. This paper dwells on the widely used neuronal activation functions as well as two new ones composed of sines and cosines, and a sinc function characterizing the firing of a neuron. The viewpoint here is to consider the hidden layer(s) as transforming blocks composed of nonlinear basis functions, which may assume different forms. This paper considers eight differentiable activation functions and utilizes the Levenberg–Marquardt algorithm for parameter tuning. The studies carried out have a guiding quality based on empirical results on several training data sets.
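The paper's premise is that the hidden layer is just a bank of nonlinear basis functions. A small sketch of a few such candidates, including a sinc unit (the exact eight functions compared in the paper may differ from these):

```python
import numpy as np

def sinc(x):
    # sin(x)/x with sinc(0) = 1; differentiable everywhere.
    # np.sinc is the normalized sin(pi x)/(pi x), hence the rescaling.
    return np.sinc(x / np.pi)

activations = {
    "tanh":    np.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "sin":     np.sin,
    "cos":     np.cos,
    "sinc":    sinc,
}

x = np.linspace(-5, 5, 11)
table = {name: f(x) for name, f in activations.items()}
```

Because each function is differentiable, a Jacobian-based trainer such as Levenberg–Marquardt can tune the weights feeding any of them.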

Journal ArticleDOI
Vimal Singh1
TL;DR: A modified form of a recent criterion for the global robust stability of interval-delayed Hopfield neural networks is presented and the effectiveness of the modified criterion is demonstrated with the help of an example.
Abstract: A modified form of a recent criterion for the global robust stability of interval-delayed Hopfield neural networks is presented. The effectiveness of the modified criterion is demonstrated with the help of an example.

Journal ArticleDOI
TL;DR: This paper deals with real-time implementation of visual-motor control of a 7 degree of freedom (DOF) robot manipulator using self-organized map (SOM) based learning approach and proposes a new clustering algorithm using Kohonen SOM lattice that maintains the fidelity of training data.
Abstract: This paper deals with real-time implementation of visual-motor control of a 7 degree of freedom (DOF) robot manipulator using a self-organized map (SOM) based learning approach. The robot manipulator considered here is a 7 DOF PowerCube manipulator from Amtec Robotics. The primary objective is to reach a target point in the task space using only a single-step movement from any arbitrary initial configuration of the robot manipulator. A new clustering algorithm using the Kohonen SOM lattice has been proposed that maintains the fidelity of the training data. Two different approaches have been proposed to find an inverse kinematic solution without using any orientation feedback. In the first approach, the inverse Jacobian matrices are learnt from the training data using function decomposition. It is shown that function decomposition leads to significant improvement in the accuracy of the inverse kinematic solution. In the second approach, a concept called sub-clustering in configuration space is suggested to provide multiple solutions for the inverse kinematic problem. Redundancy is resolved at the position level using several criteria. A redundant manipulator is dexterous owing to the availability of multiple configurations for a given end-effector position. However, existing visual-motor coordination schemes provide only one inverse kinematic solution for every target position even when the manipulator is kinematically redundant. Thus, the second approach provides a learning architecture that can capture redundancy from the training data. The training data are generated using an explicit kinematic model of the combined robot manipulator and camera configuration. The training is carried out off-line and the trained network is used on-line to compute the joint angle vector to reach a target position in a single step. The accuracy attained is better than the current state of the art.

Journal ArticleDOI
TL;DR: The paper presents the stability and chaotic dynamical behavior of a class of ICA learning algorithms with constant learning rates, and some invariant sets are obtained so that the non-divergence of these algorithms can be guaranteed.
Abstract: Independent component analysis (ICA) neural networks can estimate independent components from the mixed signal. The dynamical behavior of the learning algorithms for ICA neural networks is crucial to effectively apply these networks to practical applications. The paper presents the stability and chaotic dynamical behavior of a class of ICA learning algorithms with constant learning rates. Some invariant sets are obtained so that the non-divergence of these algorithms can be guaranteed. In these invariant sets, the stability and chaotic behaviors are analyzed. The conditions for stability and chaos are derived. Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior.

Journal ArticleDOI
TL;DR: This paper illustrates the equivalent relationship between ELM and an orthonormal method, proves that neural networks with ELM are also universal approximators, and successfully applies ELM to the identification of QoS violations in multimedia transmission.
Abstract: Neural networks have been successfully applied to many applications due to their approximation capability. However, complicated network structures and algorithms lead to computational and time-consuming burdens. In order to satisfy demanding real-time requirements, many fast learning algorithms were explored in the past. Recently, a fast algorithm, the Extreme Learning Machine (ELM) (Huang et al. 70:489–501, 2006), was proposed. Unlike conventional algorithms whose neurons need to be tuned, the input-to-hidden neurons of ELM are randomly generated. Though a large number of experimental results have shown that input-to-hidden neurons need not be tuned, a rigorous proof that ELM possesses the universal approximation capability has been lacking. In this paper, based on the universal approximation property of an orthonormal method, we first illustrate the equivalent relationship between ELM and the orthonormal method, and further prove that neural networks with ELM are also universal approximators. We also successfully apply ELM to the identification of QoS violations in multimedia transmission.
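The ELM recipe itself is short: draw the input-to-hidden weights at random and solve only the output layer by least squares. A minimal regression sketch (hidden-layer size and the target function are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)

def elm_fit(X, y, hidden=50):
    """Extreme Learning Machine: random (untrained) hidden layer,
    output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0])
model = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
```

The absence of any iteration over `W` and `b` is exactly the speed advantage the abstract refers to; the paper's contribution is proving this scheme still approximates universally.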

Journal ArticleDOI
TL;DR: This paper proposes a method based on Kohonen's self-organizing map (SOM) that utilizes both content and context mining clustering techniques to help visitors identify relevant information more quickly.
Abstract: Web sites contain an ever increasing amount of information within their pages. As the amount of information increases, so does the complexity of the structure of the web site. Consequently it has become difficult for visitors to find the information relevant to their needs. To overcome this problem, various clustering methods have been proposed to cluster data in an effort to help visitors find the relevant information. These clustering methods have typically focused either on the content or on the context of the web pages. In this paper we propose a method based on Kohonen's self-organizing map (SOM) that utilizes both content and context mining clustering techniques to help visitors identify relevant information more quickly. The input of the content mining is the set of web pages of the web site, whereas the source of the context mining is the access-logs of the web site. SOM can be used to identify clusters of web sessions with similar context and also clusters of web pages with similar content. It can also provide a means of visualizing the outcome of this processing. In this paper we show how this two-level clustering can help visitors identify the relevant information faster. This procedure has been tested on the access-logs and web pages of the Department of Informatics and Telecommunications of the University of Athens.

Journal ArticleDOI
TL;DR: This paper focuses on pairwise decomposition approach to multiclass classification with neural networks as the base learner for the dichotomies and reviews standard methods used to decode the decomposition generated by a one-against-one approach.
Abstract: A decomposition approach to multiclass classification problems consists in decomposing a multiclass problem into a set of binary ones. Decomposition splits the complete multiclass problem into a set of smaller classification problems involving only two classes (binary classification: dichotomies). With a decomposition, one has to define a recombination which recomposes the outputs of the dichotomizers in order to solve the original multiclass problem. There are several approaches to the decomposition, the most famous ones being one-against-all and one-against-one, also called pairwise. In this paper, we focus on the pairwise decomposition approach to multiclass classification with neural networks as the base learner for the dichotomies. We are primarily interested in the different possible ways to perform the so-called recombination (or decoding). We review standard methods used to decode the decomposition generated by a one-against-one approach. New decoding methods are proposed and compared to standard methods. A stacking decoding is also proposed, which consists in replacing the whole decoding or a part of it by a trainable classifier to arbitrate among the conflicting predictions of the pairwise classifiers. The proposed methods address the main problem in using pairwise decomposition: the influence of irrelevant classifiers. Substantial gains are obtained on all datasets used in the experiments. Based on the above, we provide future research directions which consider the recombination problem as an ensemble method.
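The simplest recombination for one-against-one outputs is majority voting, the baseline the proposed decoders are compared against. A sketch of that baseline (the inputs here are hand-made dichotomizer outputs, not real classifiers):

```python
import numpy as np

def vote_decode(pairwise, n_classes):
    """Decode one-against-one outputs by majority vote.
    pairwise[(i, j)] holds, per sample, True where class i beats class j."""
    n = len(next(iter(pairwise.values())))
    votes = np.zeros((n, n_classes))
    for (i, j), i_wins in pairwise.items():
        votes[np.arange(n), np.where(i_wins, i, j)] += 1
    return np.argmax(votes, axis=1)

# Three samples, three classes; each entry is one dichotomizer's decisions.
pairwise = {
    (0, 1): np.array([True, False, False]),
    (0, 2): np.array([True, False, False]),
    (1, 2): np.array([True, True, False]),
}
labels = vote_decode(pairwise, n_classes=3)
```

For sample 0 both (0,1) and (0,2) favour class 0, so it wins 2 votes to 1; note the (1,2) dichotomizer is "irrelevant" for a true class-0 sample, which is exactly the failure mode the paper's trainable decoders target.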

Journal ArticleDOI
TL;DR: Some novel sufficient conditions on pth moment exponential stability of the trivial solution of stochastic recurrent neural networks (SRNN) with time-varying interconnections and delays are established.
Abstract: This paper addresses the issue of pth moment exponential stability of stochastic recurrent neural networks (SRNN) with time-varying interconnections and delays. With the help of the Dini derivative of the expectation of V(t, X(t)) "along" the solution X(t) of the model and the technique of Halanay-type inequality, some novel sufficient conditions on pth moment exponential stability of the trivial solution have been established. The results presented in this paper go beyond some published results and are helpful for designing stable networks when stochastic noise is taken into consideration. An example is also given to illustrate the effectiveness of our results.

Journal ArticleDOI
Zhiwu Lu1, Yuxin Peng1
TL;DR: The presented semi-supervised learning algorithm can automatically detect the number of Gaussians with good parameter estimation, even when two or more actual Gaussians in the mixture overlap to a high degree.
Abstract: In Gaussian mixture modeling, it is crucial to select the number of Gaussians for a sample set, which becomes much more difficult when the overlap in the mixture is larger. Under regularization theory, we aim to solve this problem using a semi-supervised learning algorithm that incorporates pairwise constraints into entropy regularized likelihood (ERL) learning, which can perform automatic model selection for a Gaussian mixture. The simulation experiments further demonstrate that the presented semi-supervised learning algorithm (i.e., the constrained ERL learning algorithm) can automatically detect the number of Gaussians with good parameter estimation, even when two or more actual Gaussians in the mixture overlap to a high degree. Moreover, the constrained ERL learning algorithm leads to some promising results when applied to iris data classification and image database categorization.

Journal ArticleDOI
TL;DR: In this paper, two types of recurrent techniques for fuzzy CMAC neural networks are used to overcome the problems of huge memory consumption and of dimension increasing exponentially with the number of inputs; the corresponding learning algorithms have time-varying learning rates.
Abstract: A normal fuzzy CMAC neural network performs well for nonlinear system identification because of its fast learning speed and local generalization capability for approximating nonlinear functions. However, it requires huge memory, and its dimension increases exponentially with the number of inputs. It is also difficult to model dynamic systems with static fuzzy CMACs. In this paper, we use two types of recurrent techniques for fuzzy CMAC to overcome the above problems. The new CMAC neural networks are named recurrent fuzzy CMAC (RFCMAC); they add feedback connections in the inner layers (local feedback) or the output layer (global feedback). The corresponding learning algorithms have time-varying learning rates, and the stability of the neural identification is proven.
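As a toy sketch of the ingredients involved — normalized fuzzy receptive fields, feedback of the output as a model input, and a time-varying learning rate — the following identifies a made-up nonlinear plant. It is an illustration under simplifying assumptions (series-parallel feedback of the measured output during training), not the authors' RFCMAC:

```python
import numpy as np

rng = np.random.default_rng(1)

# 5x5 grid of receptive-field centers over the (y, u) input space.
c = np.linspace(-2.0, 2.0, 5)
cy, cu = np.meshgrid(c, c)
centers = np.stack([cy.ravel(), cu.ravel()], axis=1)   # (25, 2)
w = np.zeros(len(centers))                             # adjustable weights

def phi(y, u, sigma=0.8):
    """Normalized (fuzzy) Gaussian activations for input (y, u)."""
    d2 = ((centers - np.array([y, u])) ** 2).sum(axis=1)
    a = np.exp(-d2 / (2 * sigma ** 2))
    return a / a.sum()

y, errs = 0.0, []
for k in range(2000):
    u = np.sin(0.05 * k) + 0.1 * rng.standard_normal()
    y_next = 0.6 * np.sin(y) + 0.8 * u          # "unknown" plant dynamics
    a = phi(y, u)                               # fed-back output enters here
    e = y_next - w @ a                          # one-step prediction error
    eta = 0.5 / (1.0 + k / 500.0)               # time-varying learning rate
    w += eta * e * a                            # gradient-style update
    errs.append(abs(e))
    y = y_next

print(np.mean(errs[:200]), np.mean(errs[-200:]))
```

The grid size, kernel width, and learning-rate schedule are arbitrary; the point is only that the prediction error shrinks as the weights adapt.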

Journal ArticleDOI
TL;DR: This paper models the temporal dependencies in independent component analysis by assuming that each source is an autoregressive (AR) process whose innovations are independently and identically distributed (i.i.d.).
Abstract: Independent component analysis is a fundamental and important task in unsupervised learning that has been studied mainly in the domain of Hebbian learning. In this paper, the temporal dependencies are modeled by assuming that each source is an autoregressive (AR) process whose innovations are independently and identically distributed (i.i.d.). First, the likelihood of the model is derived, which takes into account both the spatial and temporal information of the sources. Next, batch and on-line blind source separation algorithms are developed by maximizing the likelihood function, and their local stability analyses are presented. Finally, computer simulations show that the algorithms achieve better separation of mixed signals and mixed natural images that are difficult to separate with the basic independent component analysis algorithms.
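To see why temporal structure helps, here is a minimal demonstration that AR(1) sources with distinct autocorrelations can be unmixed from their temporal statistics alone. It uses the classical AMUSE procedure (whiten, then diagonalize a symmetrized lag-1 covariance) as a simple stand-in for the paper's maximum-likelihood algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

def ar1(a):
    """AR(1) source with i.i.d. Laplacian innovations."""
    e = rng.laplace(size=n)
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = a * s[t - 1] + e[t]
    return s

S = np.stack([ar1(0.9), ar1(0.2)])          # true sources, distinct dynamics
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # arbitrary mixing matrix
X = A @ S                                   # observed mixtures

# AMUSE: center and whiten, then rotate with the eigenvectors of the
# symmetrized lag-1 covariance of the whitened data.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
W = E @ np.diag(d ** -0.5) @ E.T            # whitening matrix
Z = W @ X
C1 = Z[:, 1:] @ Z[:, :-1].T / (n - 1)
_, U = np.linalg.eigh((C1 + C1.T) / 2)
Y = U.T @ Z                                 # recovered sources
```

Each row of Y should match one true source up to permutation and sign — the standard ambiguities of blind source separation.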

Journal ArticleDOI
TL;DR: The transformation of a sensor network into a neural Hopfield-like network (HLN) is proposed and the case of a 3-SN is developed in detail for illustrating the advantages of the suggested transformation.
Abstract: The transformation of a sensor network (SN) into a neural Hopfield-like network (HLN) is proposed. The SN of interest is a nonlinear, non-reciprocal population of coupled oscillators. The proposed transformation is useful for investigating the relation between the structure of the SN and its capability of arriving at a global consensus. The case of a 3-SN is developed in detail to illustrate the advantages of the suggested transformation. Both the structural conditions necessary for achieving consensus in this case and their relation to local measurements are presented.
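For intuition about consensus in such a network, the following simulates three non-reciprocally coupled phase oscillators reaching phase agreement. The Kuramoto-type dynamics and the asymmetric coupling matrix are illustrative assumptions, not the paper's specific SN-to-HLN construction:

```python
import numpy as np

# Asymmetric (non-reciprocal) positive coupling between three oscillators.
K = np.array([[0.0, 1.0, 0.5],
              [0.3, 0.0, 1.0],
              [1.0, 0.2, 0.0]])

theta = np.array([0.0, 1.0, 2.5])   # initial phases (rad), within a half-circle
dt = 0.01
for _ in range(20000):
    # Each oscillator is pulled toward its neighbours' phases.
    dtheta = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * dtheta

# Spread of the relative phases after wrapping to (-pi, pi].
spread = np.ptp(np.angle(np.exp(1j * (theta - theta[0]))))
print(spread)   # small spread indicates the phases have synchronized
```

With identical natural frequencies and positive coupling, phases starting within a half-circle contract onto a common value, i.e., the network reaches consensus.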

Journal ArticleDOI
TL;DR: In the proposed research, dynamic neural networks were constructed to precisely approximate the non-linear system with unknown uncertainties and a recurrent neural network was employed as a neuro-solver to efficiently and numerically solve the standard LMI problem so as to obtain the appropriate control gains.
Abstract: In this paper, a new approach to adaptive dynamic neural network-based H∞ control is investigated for a class of non-linear systems with unknown uncertainties. Non-linear systems with unknown uncertainties are commonly used to efficiently and accurately express real practical control processes; designing a stable and robust controller for such a process is therefore of critical importance, but remains a great challenge and is still at an early stage. In the proposed research, dynamic neural networks are first constructed to precisely approximate the non-linear system with unknown uncertainties; next, a non-linear state-feedback H∞ control law is designed; then, an adaptive weighting adjustment mechanism for the dynamic neural networks is developed to achieve the H∞ regulation performance; and last, a recurrent neural network is employed as a neuro-solver to efficiently and numerically solve the standard LMI problem and obtain the appropriate control gains. Case studies further verify the feasibility and efficiency of the proposed approach.