
Showing papers in "IEEE Transactions on Neural Networks in 2010"


Journal ArticleDOI
TL;DR: A barrier Lyapunov function (BLF) is introduced to address two open and challenging problems in the neuro-control area: for any initial compact set, how to determine a priori the compact superset on which NN approximation is valid; and how to ensure that the arguments of the unknown functions remain within the specified compact supersets.
Abstract: In this brief, adaptive neural control is presented for a class of output feedback nonlinear systems in the presence of unknown functions. The unknown functions are handled via on-line neural network (NN) control using only output measurements. A barrier Lyapunov function (BLF) is introduced to address two open and challenging problems in the neuro-control area: 1) for any initial compact set, how to determine a priori the compact superset, on which NN approximation is valid; and 2) how to ensure that the arguments of the unknown functions remain within the specified compact superset. By ensuring boundedness of the BLF, we actively constrain the argument of the unknown functions to remain within a compact superset such that the NN approximation conditions hold. The semiglobal boundedness of all closed-loop signals is ensured, and the tracking error converges to a neighborhood of zero. Simulation results demonstrate the effectiveness of the proposed approach.
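For readers unfamiliar with barrier Lyapunov functions, a commonly used symmetric form is shown below; this is illustrative only, and the brief's exact construction may differ.

```latex
% A representative symmetric barrier Lyapunov function on the error z (illustrative form):
V_b(z) = \frac{1}{2} \ln\!\frac{k_b^{2}}{k_b^{2} - z^{2}}, \qquad |z| < k_b .
% V_b is positive definite on |z| < k_b and grows unbounded as |z| \to k_b,
% so keeping V_b bounded along trajectories confines z (and hence the NN inputs)
% to a prescribed compact set on which the approximation remains valid.
```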

818 citations


Journal ArticleDOI
TL;DR: The proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM, and is still able to maintain an accuracy that is comparable to the performance of the SVM.
Abstract: In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite the simplicity and fast performance, the OP-ELM is still able to maintain an accuracy that is comparable to the performance of the SVM. A toolbox for the OP-ELM is publicly available online.
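As a rough illustration of the pipeline (random hidden layer, neuron ranking, pruning by leave-one-out error), here is a simplified sketch. The real OP-ELM ranks neurons with multiresponse sparse regression (MRSR) rather than the crude correlation ranking used below, and the sizes, activation, and helper names here are assumptions.

```python
import numpy as np

def op_elm_sketch(X, y, n_hidden=100, seed=0):
    """Simplified OP-ELM-style sketch (illustrative, not the authors' toolbox):
    1) random sigmoid hidden layer (standard ELM),
    2) rank hidden neurons by |correlation| with the target (stand-in for MRSR),
    3) keep the number of top-ranked neurons minimizing the PRESS leave-one-out error."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer outputs

    corr = np.abs(np.corrcoef(H.T, y)[-1, :-1])       # crude neuron ranking
    order = np.argsort(-corr)

    best = (np.inf, None)
    for k in range(1, n_hidden + 1):
        Hk = H[:, order[:k]]
        beta, *_ = np.linalg.lstsq(Hk, y, rcond=None)
        P = Hk @ np.linalg.pinv(Hk)                    # hat matrix for PRESS statistic
        resid = y - Hk @ beta
        press = np.mean((resid / (1.0 - np.clip(np.diag(P), 0.0, 0.999999))) ** 2)
        if press < best[0]:
            best = (press, (order[:k], beta))
    neurons, beta = best[1]
    return W[:, neurons], b[neurons], beta

# Usage: Ws, bs, beta = op_elm_sketch(X_train, y_train)
#        y_hat = (1.0 / (1.0 + np.exp(-(X_test @ Ws + bs)))) @ beta
```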

745 citations


Journal ArticleDOI
TL;DR: It is proved that the proposed robust backstepping control is able to guarantee semiglobal uniform ultimate boundedness of all signals in the closed-loop system.
Abstract: In this paper, robust adaptive neural network (NN) control is investigated for a general class of uncertain multiple-input-multiple-output (MIMO) nonlinear systems with unknown control coefficient matrices and input nonlinearities. For nonsymmetric input nonlinearities of saturation and deadzone, variable structure control (VSC) in combination with backstepping and Lyapunov synthesis is proposed for adaptive NN control design with guaranteed stability. In the proposed adaptive NN control, the usual assumption on nonsingularity of the NN approximation for unknown control coefficient matrices and the boundary assumption between the NN approximation error and the control input have been eliminated. Command filters are presented to implement physical constraints on the virtual control laws, so that the tedious analytic computation of the time derivatives of the virtual control laws is avoided. It is proved that the proposed robust backstepping control is able to guarantee semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, simulation results are presented to illustrate the effectiveness of the proposed adaptive NN control.

670 citations


Journal ArticleDOI
TL;DR: It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle.
Abstract: In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of the proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to demonstrate the theoretical conjecture.

500 citations


Journal ArticleDOI
TL;DR: The improved computation presented in this paper optimizes the neural network learning process using the Levenberg-Marquardt (LM) algorithm; the memory and time savings are especially pronounced when training with large-sized patterns.
Abstract: The improved computation presented in this paper optimizes the neural network learning process using the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian matrix multiplication and storage, which removes the memory limitation of LM training. Considering the symmetry of the quasi-Hessian matrix, only the elements in its upper (or lower) triangular part need to be calculated. Therefore, training speed is improved significantly, not only because of the smaller arrays stored in memory but also because of the reduced number of operations in the quasi-Hessian computation. The improved memory and time efficiency is especially pronounced when training with large-sized patterns.
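A minimal sketch of the core idea follows: the quasi-Hessian and gradient are accumulated one training pattern at a time, so the full Jacobian is never formed or stored. Here `jacobian_row` and `residual` are hypothetical user-supplied callbacks, and the damping-adjustment logic of a full LM loop is omitted.

```python
import numpy as np

def lm_step(jacobian_row, residual, params, data, mu=1e-3):
    """Illustrative pattern-by-pattern accumulation in the spirit of the paper:
    Q = sum_p j_p^T j_p (quasi-Hessian) and g = sum_p j_p * e_p (gradient) are
    built without storing the full Jacobian. jacobian_row(params, x) returns the
    1 x n Jacobian row for one pattern; residual(params, x) returns its scalar error."""
    n = params.size
    Q = np.zeros((n, n))
    g = np.zeros(n)
    for x in data:                       # one training pattern at a time
        j = jacobian_row(params, x)      # Jacobian row for this pattern
        e = residual(params, x)          # error for this pattern
        Q += np.outer(j, j)              # rank-one update of the quasi-Hessian
        g += j * e                       # gradient accumulation
    # Only the upper triangle of Q actually needs computing (Q is symmetric).
    step = np.linalg.solve(Q + mu * np.eye(n), g)
    return params - step
```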

495 citations


Journal ArticleDOI
TL;DR: By constructing a novel Lyapunov-like matrix functional, the idea of delay fractioning is applied to deal with the addressed synchronization analysis problem and several delay-dependent sufficient conditions are obtained which ensure the asymptotic synchronization in the mean square sense for the discrete-time stochastic complex networks with time delays.
Abstract: In this paper, the problem of stochastic synchronization analysis is investigated for a new array of coupled discrete-time stochastic complex networks with randomly occurred nonlinearities (RONs) and time delays. The discrete-time complex networks under consideration are subject to: (1) stochastic nonlinearities that occur according to Bernoulli-distributed white noise sequences; (2) stochastic disturbances that enter the coupling term, the delayed coupling term, as well as the overall network; and (3) time delays that include both discrete and distributed ones. Note that the newly introduced RONs and the multiple stochastic disturbances can better reflect the dynamical behaviors of coupled complex networks whose information transmission process is affected by a noisy environment (e.g., Internet-based control systems). By constructing a novel Lyapunov-like matrix functional, the idea of delay fractioning is applied to deal with the addressed synchronization analysis problem. By employing a combination of linear matrix inequality (LMI) techniques, the free-weighting matrix method, and stochastic analysis theories, several delay-dependent sufficient conditions are obtained which ensure asymptotic synchronization in the mean square sense for the discrete-time stochastic complex networks with time delays. The derived criteria are expressed in terms of LMIs, which can be solved using standard numerical software. A simulation example is presented to show the effectiveness and applicability of the proposed results.
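In this line of work, randomly occurred nonlinearities are usually modeled by switching between two nonlinear functions with a Bernoulli-distributed indicator. The form below is an illustrative model of that idea, not necessarily the exact one used in the paper.

```latex
% Illustrative RON model: a Bernoulli white sequence \alpha(k) decides which
% nonlinearity is active at time step k.
f\bigl(x(k)\bigr) = \alpha(k)\, f_1\bigl(x(k)\bigr) + \bigl(1 - \alpha(k)\bigr)\, f_2\bigl(x(k)\bigr),
\qquad
\operatorname{Prob}\{\alpha(k) = 1\} = \bar{\alpha}, \quad
\operatorname{Prob}\{\alpha(k) = 0\} = 1 - \bar{\alpha}.
```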

495 citations


Journal ArticleDOI
TL;DR: New delay-dependent stability criteria for RNNs with time-varying delay are derived by applying this weighting-delay method, which are less conservative than previous results.
Abstract: In this paper, a weighting-delay-based method is developed for the study of the stability problem of a class of recurrent neural networks (RNNs) with time-varying delay. Different from previous results, the delay interval [0, d(t)] is divided into some variable subintervals by employing weighting delays. Thus, new delay-dependent stability criteria for RNNs with time-varying delay are derived by applying this weighting-delay method, which are less conservative than previous results. The proposed stability criteria depend on the positions of weighting delays in the interval [0, d(t)], which can be denoted by the weighting-delay parameters. Different weighting-delay parameters lead to different stability margins for a given system. Thus, a solution based on optimization methods is further given to calculate the optimal weighting-delay parameters. Several examples are provided to verify the effectiveness of the proposed criteria.

374 citations


Journal ArticleDOI
TL;DR: This paper treats tracking as a learning problem of estimating the location and the scale of an object given its previous location and scale as well as the current and previous image frames, and introduces multiple pathways in the CNN to better fuse local and global information.
Abstract: In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location and scale as well as the current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform this estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple pathways in the CNN to better fuse local and global information. A shift-variant CNN architecture is designed to alleviate the drift problem when distracting objects similar to the target appear in cluttered environments. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent, so the proposed method can be applied to track other types of objects. The capability of the tracker to handle complex situations is demonstrated in many test sequences.

346 citations


Journal ArticleDOI
TL;DR: This work proposes a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization that selects features through maximizing the classification margin between different classes and simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data.
Abstract: Feature selection has attracted a huge amount of interest in both research and application communities of data mining. We consider the problem of semi-supervised feature selection, where we are given a small amount of labeled examples and a large amount of unlabeled examples. Since a small number of labeled samples are usually insufficient for identifying the relevant features, the critical problem arising from semi-supervised feature selection is how to take advantage of the information underneath the unlabeled data. To address this problem, we propose a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization. The proposed approach selects features through maximizing the classification margin between different classes and simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data. In comparison with previous semi-supervised feature selection algorithms, our proposed semi-supervised feature selection method is an embedded feature selection method and is able to find more discriminative features. We formulate the proposed feature selection method into a convex-concave optimization problem, where the saddle point corresponds to the optimal solution. To find the optimal solution, the level method, a fairly recent optimization method, is employed. We also present a theoretic proof of the convergence rate for the application of the level method to our problem. Empirical evaluation on several benchmark data sets demonstrates the effectiveness of the proposed semi-supervised feature selection method.

346 citations


Journal ArticleDOI
TL;DR: A neural-network-based adaptive approach is proposed for the leader-following control of multiagent systems that takes uncertainty in the agent's dynamics into account; the leader's state could be time-varying; and the proposed algorithm for each following agent is only dependent on the information of its neighbor agents.
Abstract: A neural-network-based adaptive approach is proposed for the leader-following control of multiagent systems. The neural network is used to approximate the agent's uncertain dynamics, and the approximation error and external disturbances are counteracted by employing the robust signal. When there is no control input constraint, it can be proved that all the following agents can track the leader's time-varying state with the tracking error as small as desired. Compared with the related work in the literature, the uncertainty in the agent's dynamics is taken into account; the leader's state could be time-varying; and the proposed algorithm for each following agent is only dependent on the information of its neighbor agents. Finally, the satisfactory performance of the proposed method is illustrated by simulation examples.

308 citations


Journal ArticleDOI
TL;DR: By constructing a novel Lyapunov-Krasovskii functional, and using some new approaches and techniques, several novel sufficient conditions are obtained to ensure the exponential stability of the trivial solution in the mean square.
Abstract: This paper is concerned with the problem of exponential stability for a class of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays and known or unknown parameters. The jumping parameters are determined by a continuous-time, discrete-state Markov chain, and the mixed time delays under consideration comprise both time-varying delays and continuously distributed delays. To the best of the authors' knowledge, till now, the exponential stability problem for this class of generalized neural networks has not yet been solved since continuously distributed delays are considered in this paper. The main objective of this paper is to fill this gap. By constructing a novel Lyapunov-Krasovskii functional, and using some new approaches and techniques, several novel sufficient conditions are obtained to ensure the exponential stability of the trivial solution in the mean square. The results presented in this paper generalize and improve many known results. Finally, two numerical examples and their simulations are given to show the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: RobustICA's capabilities in processing real-world data involving noncircular complex strongly super-Gaussian sources are illustrated by the biomedical problem of atrial activity (AA) extraction in atrial fibrillation (AF) electrocardiograms (ECGs), where it outperforms an alternative ICA-based technique.
Abstract: Independent component analysis (ICA) aims at decomposing an observed random vector into statistically independent variables. Deflation-based implementations, such as the popular one-unit FastICA algorithm and its variants, extract the independent components one after another. A novel method for deflationary ICA, referred to as RobustICA, is put forward in this paper. This simple technique consists of performing exact line search optimization of the kurtosis contrast function. The step size leading to the global maximum of the contrast along the search direction is found among the roots of a fourth-degree polynomial. This polynomial rooting can be performed algebraically, and thus at low cost, at each iteration. Among other practical benefits, RobustICA can avoid prewhitening and deals with real- and complex-valued mixtures of possibly noncircular sources alike. The absence of prewhitening improves asymptotic performance. The algorithm is robust to local extrema and shows a very high convergence speed in terms of the computational cost required to reach a given source extraction quality, particularly for short data records. These features are demonstrated by a comparative numerical analysis on synthetic data. RobustICA's capabilities in processing real-world data involving noncircular complex strongly super-Gaussian sources are illustrated by the biomedical problem of atrial activity (AA) extraction in atrial fibrillation (AF) electrocardiograms (ECGs), where it outperforms an alternative ICA-based technique.
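The sketch below illustrates the deflationary, kurtosis-based extraction loop without prewhitening. It replaces RobustICA's exact algebraic line search (roots of a fourth-degree polynomial) with a coarse numerical search over candidate step sizes, so it is only a rough stand-in for the actual algorithm; all parameter choices are assumptions.

```python
import numpy as np

def kurtosis_contrast(s):
    """Normalized kurtosis magnitude of a zero-mean signal."""
    s = s - s.mean()
    return np.abs(np.mean(s**4) - 3 * np.mean(s**2)**2) / (np.mean(s**2)**2 + 1e-12)

def robustica_like(X, n_sources, n_iter=50, mus=np.linspace(-2, 2, 81), seed=0):
    """Deflationary kurtosis-based ICA sketch without prewhitening. The real RobustICA
    finds the globally optimal step algebraically; here the line search is a coarse
    numerical scan over candidate steps (illustrative only)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    sources, R = [], X.copy()
    for _ in range(n_sources):
        w = rng.normal(size=R.shape[0]); w /= np.linalg.norm(w)
        for _ in range(n_iter):
            s = w @ R
            # gradient of the (unnormalized) kurtosis contrast w.r.t. w
            g = 4 * (R @ s**3) / R.shape[1] - 12 * np.mean(s**2) * (R @ s) / R.shape[1]
            g /= np.linalg.norm(g) + 1e-12
            best_mu = max(mus, key=lambda mu: kurtosis_contrast((w + mu * g) @ R))
            w = w + best_mu * g; w /= np.linalg.norm(w)
        s = w @ R
        sources.append(s)
        R = R - np.outer((R @ s) / (s @ s), s)   # regression-based deflation
    return np.array(sources)
```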

Journal ArticleDOI
TL;DR: Empirical study on three real-world databases shows that PNMF can achieve the best or close to the best in clustering and run more efficiently than the compared NMF methods, especially for high-dimensional data.
Abstract: A variant of nonnegative matrix factorization (NMF) which was proposed earlier is analyzed here. It is called projective nonnegative matrix factorization (PNMF). The new method approximately factorizes a projection matrix, minimizing the reconstruction error, into a positive low-rank matrix and its transpose. The dissimilarity between the original data matrix and its approximation can be measured by the Frobenius matrix norm or the modified Kullback-Leibler divergence. Both measures are minimized by multiplicative update rules, whose convergence is proven for the first time. Enforcing orthonormality to the basic objective is shown to lead to an even more efficient update rule, which is also readily extended to nonlinear cases. The formulation of the PNMF objective is shown to be connected to a variety of existing NMF methods and clustering approaches. In addition, the derivation using Lagrangian multipliers reveals the relation between reconstruction and sparseness. For kernel principal component analysis (PCA) with the binary constraint, useful in graph partitioning problems, the nonlinear kernel PNMF provides a good approximation which outperforms an existing discretization approach. Empirical study on three real-world databases shows that PNMF can achieve the best or close to the best in clustering. The proposed algorithm runs more efficiently than the compared NMF methods, especially for high-dimensional data. Moreover, contrary to the basic NMF, the trained projection matrix can be readily used for newly coming samples and demonstrates good generalization.
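As a concrete illustration of the projective factorization X ≈ W Wᵀ X, the sketch below uses a multiplicative update of the form commonly quoted for Frobenius-norm PNMF. The exact update, normalization, and convergence argument are in the paper; the iteration count and initialization here are assumptions.

```python
import numpy as np

def pnmf_frobenius(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Sketch of projective NMF (PNMF) under the Frobenius norm: find W >= 0 such that
    X ~ W @ W.T @ X, using a multiplicative update of the form commonly quoted for PNMF.
    Illustrative only; the paper also covers a KL-divergence variant, orthonormality,
    and a kernelized extension."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    A = X @ X.T                                    # the data enter only through X X^T
    for _ in range(n_iter):
        AW = A @ W
        numer = 2.0 * AW
        denom = W @ (W.T @ AW) + A @ (W @ (W.T @ W)) + eps
        W *= numer / denom                         # multiplicative, nonnegativity-preserving
    return W

# Usage: W = pnmf_frobenius(X, rank=10); X_hat = W @ (W.T @ X)
```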

Journal ArticleDOI
TL;DR: The fact that there is no constant equilibrium point other than the origin for the reaction-diffusion neural networks with Dirichlet boundary conditions under the impulsive control is pointed out.
Abstract: This paper discuss the global exponential stability and synchronization of the delayed reaction-diffusion neural networks with Dirichlet boundary conditions under the impulsive control in terms of p-norm and point out the fact that there is no constant equilibrium point other than the origin for the reaction-diffusion neural networks with Dirichlet boundary conditions. Some new and useful conditions dependent on the diffusion coefficients are obtained to guarantee the global exponential stability and synchronization of the addressed neural networks under the impulsive controllers we assumed. Finally, some numerical examples are given to demonstrate the effectiveness of the proposed control methods.

Journal ArticleDOI
TL;DR: The (non-probabilistic) error analysis justifies a “clustered Nyström method” that uses the k-means clustering centers as landmark points and can be applied to scale up a wide variety of algorithms that depend on the eigenvalue decomposition of kernel matrix.
Abstract: The kernel (or similarity) matrix plays a key role in many machine learning algorithms such as kernel methods, manifold learning, and dimension reduction. However, the cost of storing and manipulating the complete kernel matrix makes it infeasible for large problems. The Nyström method is a popular sampling-based low-rank approximation scheme for reducing the computational burden of handling large kernel matrices. In this paper, we analyze how the approximation quality of the Nyström method depends on the choice of landmark points, and in particular on the encoding power of the landmark points in summarizing the data. Our (non-probabilistic) error analysis justifies a “clustered Nyström method” that uses the k-means clustering centers as landmark points. Our algorithm can be applied to scale up a wide variety of algorithms that depend on the eigenvalue decomposition of the kernel matrix (or a variant of it), such as kernel principal component analysis, Laplacian eigenmaps, and spectral clustering, as well as those involving the kernel matrix inverse, such as least-squares support vector machines and Gaussian process regression. Extensive experiments demonstrate the competitive performance of our algorithm in both accuracy and efficiency.
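A minimal sketch of the clustered Nyström idea follows: k-means centers serve as landmark points, and the standard Nyström formula yields a low-rank approximation of an RBF kernel matrix. The kernel choice, gamma, and number of landmarks below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_nystrom(X, n_landmarks=100, gamma=1.0, seed=0):
    """Sketch of the clustered Nystrom method: use k-means centers as landmarks,
    then form the Nystrom low-rank approximation K ~ C @ W_pinv @ C.T of an RBF
    kernel matrix (illustrative parameter choices)."""
    centers = KMeans(n_clusters=n_landmarks, n_init=10, random_state=seed).fit(X).cluster_centers_

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    C = rbf(X, centers)             # n x m cross-kernel between data and landmarks
    W = rbf(centers, centers)       # m x m landmark kernel
    return C, np.linalg.pinv(W)

# Usage: C, Wp = clustered_nystrom(X); K_approx = C @ Wp @ C.T
```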

Journal ArticleDOI
TL;DR: The proposed FWNN models are obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of fuzzy rules with wavelet basis functions that have the ability to localize both in time and frequency domains.
Abstract: This paper presents fuzzy wavelet neural network (FWNN) models for the prediction and identification of nonlinear dynamical systems. The proposed FWNN models are obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of the fuzzy rules with wavelet basis functions that have the ability to localize in both the time and frequency domains. The first and last models use summation and multiplication, respectively, of dilated and translated versions of single-dimensional wavelet basis functions, while in the second model the THEN parts of the rules consist of a radial function of wavelets. Gaussian-type activation functions are used in the IF parts of the fuzzy rules. A fast gradient-based training algorithm, the Broyden-Fletcher-Goldfarb-Shanno method, is used to find the optimal values of the unknown parameters of the FWNN models. Simulation examples are given to compare the effectiveness of the models with other known methods in the literature. According to the simulation results, the proposed FWNN models have impressive generalization ability.

Journal ArticleDOI
TL;DR: RAMOBoost adaptively ranks minority class instances at each learning iteration according to a sampling probability distribution that is based on the underlying data distribution, and can adaptively shift the decision boundary toward difficult-to-learn minority and majority class instances by using a hypothesis assessment procedure.
Abstract: In recent years, learning from imbalanced data has attracted growing attention from both academia and industry due to the explosive growth of applications that use and produce imbalanced data. However, because of the complex characteristics of imbalanced data, many real-world solutions struggle to provide robust efficiency in learning-based applications. In an effort to address this problem, this paper presents Ranked Minority Oversampling in Boosting (RAMOBoost), which is a ranked minority oversampling (RAMO) technique based on the idea of adaptive synthetic data generation in an ensemble learning system. Briefly, RAMOBoost adaptively ranks minority class instances at each learning iteration according to a sampling probability distribution that is based on the underlying data distribution, and can adaptively shift the decision boundary toward difficult-to-learn minority and majority class instances by using a hypothesis assessment procedure. Simulation analysis on 19 real-world datasets, assessed over various metrics including overall accuracy, precision, recall, F-measure, G-mean, and receiver operating characteristic (ROC) analysis, is used to illustrate the effectiveness of this method.
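The sketch below captures only the ranked-sampling idea: minority instances with more majority-class neighbors receive higher sampling probability, and synthetic samples are interpolated between minority neighbors. The boosting loop, hypothesis assessment, and exact weighting scheme of RAMOBoost are omitted, and the parameter names here are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def ranked_minority_oversample(X, y, minority=1, k=5, n_new=100, alpha=0.3, seed=0):
    """Very simplified stand-in for the ranked minority oversampling idea:
    rank minority instances by the fraction of majority-class neighbors, sample
    seeds from that distribution, and generate SMOTE-style interpolations."""
    rng = np.random.default_rng(seed)
    Xmin = X[y == minority]
    nn_all = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn_all.kneighbors(Xmin)
    maj_frac = (y[idx[:, 1:]] != minority).mean(axis=1)   # fraction of majority neighbors
    prob = np.exp(alpha * maj_frac)
    prob /= prob.sum()                                    # ranked sampling distribution

    nn_min = NearestNeighbors(n_neighbors=min(k + 1, len(Xmin))).fit(Xmin)
    seeds = rng.choice(len(Xmin), size=n_new, p=prob)
    _, midx = nn_min.kneighbors(Xmin[seeds])
    mates = Xmin[midx[np.arange(n_new), rng.integers(1, midx.shape[1], size=n_new)]]
    gap = rng.random((n_new, 1))
    return Xmin[seeds] + gap * (mates - Xmin[seeds])      # synthetic minority samples
```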

Journal ArticleDOI
TL;DR: By combining a novel Lyapunov-Krasovskii functional, which benefits from the delay partitioning method, with the free-weighting matrix technique, sufficient conditions are proposed to guarantee the exponential stability for the switched neural networks with constant and time-varying delays, respectively.
Abstract: This paper is concerned with the problem of exponential stability analysis of continuous-time switched delayed neural networks. By using the average dwell time approach together with the piecewise Lyapunov function technique and by combining a novel Lyapunov-Krasovskii functional, which benefits from the delay partitioning method, with the free-weighting matrix technique, sufficient conditions are proposed to guarantee the exponential stability for the switched neural networks with constant and time-varying delays, respectively. Moreover, the decay estimates are explicitly given. The results reported in this paper not only depend upon the delay but also depend upon the partitioning, which aims at reducing the conservatism. Numerical examples are presented to demonstrate the usefulness of the derived theoretical results.

Journal ArticleDOI
TL;DR: New delay-dependent passivity criteria are established to guarantee the passivity performance of NNs by constructing a novel Lyapunov functional and utilizing some advanced techniques, which can be efficiently solved via standard numerical software.
Abstract: In this brief, the problem of passivity analysis is investigated for a class of uncertain neural networks (NNs) with both discrete and distributed time-varying delays. By constructing a novel Lyapunov functional and utilizing some advanced techniques, new delay-dependent passivity criteria are established to guarantee the passivity performance of NNs. Essentially different from the available results, when estimating the upper bound of the derivative of the Lyapunov functional, we retain and make full use of the additional useful terms involving the distributed delays, which leads to less conservative results. These criteria are expressed in the form of convex optimization problems, which can be efficiently solved via standard numerical software. Numerical examples are provided to illustrate the effectiveness and reduced conservatism of the proposed results.

Journal ArticleDOI
TL;DR: New delay-dependent stability criteria are presented by constructing a novel Lyapunov-Krasovskii functional, which can guarantee the new stability conditions to be less conservative than those in the literature.
Abstract: This brief addresses the stability analysis problem for stochastic neural networks (SNNs) with discrete interval and distributed time-varying delays. The interval time-varying delay is assumed to satisfy 0 < d1 ≤ d(t) ≤ d2 and is described as d(t) = d1 + h(t) with 0 ≤ h(t) ≤ d2 - d1. Based on the idea of partitioning the lower bound d1, new delay-dependent stability criteria are presented by constructing a novel Lyapunov-Krasovskii functional, which can guarantee the new stability conditions to be less conservative than those in the literature. The obtained results are formulated in the form of linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness and less conservatism of the developed results.

Journal ArticleDOI
TL;DR: A sufficient condition is obtained to ensure that n-neuron recurrent neural networks can have (4k-1)^n equilibrium points and (2k)^n of them are locally exponentially stable, which improves and extends the existing stability results in the literature.
Abstract: In this brief, stability of multiple equilibria of recurrent neural networks with time-varying delays and the piecewise linear activation function is studied. A sufficient condition is obtained to ensure that n-neuron recurrent neural networks can have (4k-1)^n equilibrium points and (2k)^n of them are locally exponentially stable. This condition improves and extends the existing stability results in the literature. Simulation results are also discussed in one illustrative example.

Journal ArticleDOI
TL;DR: A synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs) that merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP) and yields a unimodal weight distribution.
Abstract: This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution where the height of the plasticity window associated with STDP is modulated, causing stability after a period of training. The SNN uses a single training neuron in the training phase, where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses, which create a frequency routing capability allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results show that the proposed training algorithm achieves a convergence accuracy of 95.5% and 96.2% for the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% for the testing sets. Noise experiments show that SWAT has good generalization capability. SWAT is also benchmarked using an isolated-digit automatic speech recognition (ASR) system in which a subset of the TI46 speech corpus is used. Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875% for training and 95.25% for testing.
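The sketch below pairs an exponential STDP window with a BCM-style sliding-threshold modulation of the window height. It is only a rough illustration of the spirit of SWAT's STDP/BCM merge, not the authors' exact rule; all constants and the threshold-adaptation form are assumptions.

```python
import numpy as np

def stdp_bcm_update(w, dt_spike, post_rate, theta, a=0.01, tau=20.0,
                    lr_theta=0.001, w_max=1.0):
    """Illustrative weight update: an exponential STDP window whose height is
    modulated by a BCM-like factor phi = c*(c - theta), with theta a sliding
    threshold tracking recent postsynaptic activity. dt_spike = t_post - t_pre (ms)."""
    phi = post_rate * (post_rate - theta)          # BCM modulation of the window height
    amp = a * phi
    if dt_spike >= 0:                              # pre-before-post branch
        dw = amp * np.exp(-dt_spike / tau)
    else:                                          # post-before-pre branch
        dw = -amp * np.exp(dt_spike / tau)
    w = float(np.clip(w + dw, 0.0, w_max))
    theta += lr_theta * (post_rate ** 2 - theta)   # sliding threshold adaptation
    return w, theta
```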

Journal ArticleDOI
TL;DR: The method introduced in this paper allows for training arbitrarily connected neural networks; therefore, more powerful neural network architectures with connections across layers can be efficiently trained.
Abstract: The method introduced in this paper allows for training arbitrarily connected neural networks; therefore, more powerful neural network architectures with connections across layers can be efficiently trained. The proposed method also simplifies neural network training by using forward-only computation instead of the traditionally used forward and backward computation.

Journal ArticleDOI
TL;DR: To achieve uniform ultimate boundedness of all the signals in the closed-loop system and tracking performance, the control gains are modified into a dynamic form using a class of even functions, which allows the stability analysis to be carried out in the presence of multiple time-varying delays.
Abstract: This paper presents adaptive neural tracking control for a class of non-affine pure-feedback systems with multiple unknown state time-varying delays. To overcome the design difficulty arising from the non-affine structure of the pure-feedback system, the mean value theorem is exploited to deduce the affine appearance of the state variables as virtual controls and of the actual control. A separation technique is introduced to decompose the unknown functions of all time-varying delayed states into a series of continuous functions of each delayed state. Novel Lyapunov-Krasovskii functionals are employed to compensate for the unknown functions of the current delayed state; this approach is effectively free from any restriction on the unknown time-delay functions and overcomes the circular construction of the controller caused by the neural approximation. Novel continuous functions are introduced to overcome the design difficulty arising from the use of one adaptive parameter. To achieve uniform ultimate boundedness of all the signals in the closed-loop system and tracking performance, the control gains are modified into a dynamic form using a class of even functions, which allows the stability analysis to be carried out in the presence of multiple time-varying delays. Simulation studies are provided to demonstrate the effectiveness of the proposed scheme.

Journal ArticleDOI
TL;DR: Some of the theoretical results on monotone neural networks with positive weights are clarified, issues that are sometimes misunderstood in the neural network literature.
Abstract: In many classification and prediction problems it is known that the response variable depends on certain explanatory variables. Monotone neural networks can be used as powerful tools to build monotone models with better accuracy and lower variance compared to ordinary nonmonotone models. Monotonicity is usually obtained by putting constraints on the parameters of the network. In this paper, we will clarify some of the theoretical results on monotone neural networks with positive weights, issues that are sometimes misunderstood in the neural network literature. Furthermore, we will generalize some of the results obtained by Sill for the so-called min-max networks to the case of partially monotone problems. The method is illustrated in practical case studies.

Journal ArticleDOI
TL;DR: This brief addresses the problem of designing adaptive neural network tracking control for a class of strict-feedback systems with unknown time-varying disturbances of known periods which nonlinearly appear in unknown functions.
Abstract: This brief addresses the problem of designing adaptive neural network tracking control for a class of strict-feedback systems with unknown time-varying disturbances of known periods which nonlinearly appear in unknown functions. Multilayer neural network (MNN) and Fourier series expansion (FSE) are combined into a novel approximator to model each uncertainty in systems. Dynamic surface control (DSC) approach and integral-type Lyapunov function (ILF) technique are combined to design the control algorithm. The ultimate uniform boundedness of all closed-loop signals is guaranteed. The tracking error is proved to converge to a small residual set around the origin. Two simulation examples are provided to illustrate the feasibility of control scheme proposed in this brief.

Journal ArticleDOI
TL;DR: A Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm based on the NSGA2 evolutionary algorithm (MPENSGA2), which is applied to solve 17 classification benchmark problems obtained from the University of California at Irvine repository and one complex real classification problem.
Abstract: This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate and a high classification rate for each class. The latter objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.

Journal ArticleDOI
TL;DR: This paper investigates exponential synchronization of coupled networks with hybrid coupling, which is composed of constant coupling and discrete-delay coupling, and finds that the synchronized state will vary in comparison with the conventional synchronized solution.
Abstract: This paper investigates the exponential synchronization of coupled networks with hybrid coupling, which is composed of constant coupling and discrete-delay coupling. There is only one transmittal delay in the delayed coupling. Since, in the signal transmission process, the time delay affects only the variable that is being transmitted from one system to another, it makes sense to assume that there is only one single delay contributing to the dynamics. Some sufficient conditions for synchronization are derived based on a Lyapunov functional and linear matrix inequalities (LMIs). In particular, the coupling matrix may be asymmetric or nondiagonal. Moreover, the transmittal delay can be different from the one in the isolated system. A distinctive feature of this work is that the synchronized state varies in comparison with the conventional synchronized solution. In particular, the degree of the nodes and the inner delayed coupling matrix heavily influence the synchronized state. Finally, a chaotic neural network is used as the node in two regular networks to show the effectiveness of the proposed criteria.

Journal ArticleDOI
TL;DR: A new approach to address the problem of objective image quality estimation, with the use of singular vectors out of singular value decomposition (SVD) as features for quantifying major structural information in images and then support vector regression (SVR) for automatic prediction of image quality.
Abstract: Objective image quality estimation is useful in many visual processing systems, and is difficult to perform in line with the human perception. The challenge lies in formulating effective features and fusing them into a single number to predict the quality score. In this brief, we propose a new approach to address the problem, with the use of singular vectors out of singular value decomposition (SVD) as features for quantifying major structural information in images and then support vector regression (SVR) for automatic prediction of image quality. The feature selection with singular vectors is novel and general for gauging structural changes in images as a good representative of visual quality variations. The use of SVR exploits the advantages of machine learning with the ability to learn complex data patterns for an effective and generalized mapping of features into a desired score, in contrast with the oft-utilized feature pooling process in the existing image quality estimators; this is to overcome the difficulty of model parameter determination for such a system to emulate the related, complex human visual system (HVS) characteristics. Experiments conducted with three independent databases confirm the effectiveness of the proposed system in predicting image quality with better alignment with the HVS's perception than the relevant existing work. The tests with untrained distortions and databases further demonstrate the robustness of the system and the importance of the feature selection.
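To make the pipeline concrete, here is a rough sketch of the two stages: singular-vector-based features comparing a reference and a distorted image, followed by support vector regression onto subjective scores. The exact block-wise feature construction and SVR settings in the paper differ; everything below (k, kernel, C, the alignment-based feature) is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def svd_features(ref_img, dist_img, k=8):
    """Sketch of the feature idea: compare the leading singular vectors of the
    reference and distorted images and use their (absolute) inner products as
    structural features. The paper's block-wise construction may differ."""
    Ur, _, Vhr = np.linalg.svd(ref_img.astype(float), full_matrices=False)
    Ud, _, Vhd = np.linalg.svd(dist_img.astype(float), full_matrices=False)
    fu = np.abs(np.sum(Ur[:, :k] * Ud[:, :k], axis=0))    # left singular-vector alignment
    fv = np.abs(np.sum(Vhr[:k, :] * Vhd[:k, :], axis=1))  # right singular-vector alignment
    return np.concatenate([fu, fv])

# Train an SVR mapping features to subjective scores (e.g., MOS), then predict quality:
# feats = np.stack([svd_features(r, d) for r, d in image_pairs])
# model = SVR(kernel='rbf', C=10.0).fit(feats, mos_scores)
# quality = model.predict(np.stack([svd_features(r, d) for r, d in new_pairs]))
```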

Journal ArticleDOI
TL;DR: It is demonstrated that mRVMs can produce state-of-the-art results on multiclass discrimination problems and this is achieved by utilizing only a very small fraction of the available observation data.
Abstract: In this paper, we investigate the sparsity and recognition capabilities of two approximate Bayesian classification algorithms, the multiclass multi-kernel relevance vector machines (mRVMs) that have been recently proposed. We provide an insight into the behavior of the mRVM models by performing a wide experimentation on a large range of real-world datasets. Furthermore, we monitor various model fitting characteristics that identify the predictive nature of the proposed methods and compare against existing classification techniques. By introducing novel convergence measures, sample selection strategies and model improvements, it is demonstrated that mRVMs can produce state-of-the-art results on multiclass discrimination problems. In addition, this is achieved by utilizing only a very small fraction of the available observation data.