
Showing papers in "IEEE Transactions on Neural Networks in 2008"


Journal ArticleDOI
TL;DR: It is shown that even without a fully optimized design, an MPCA-based gait recognition module achieves highly competitive performance and compares favorably to the state-of-the-art gait recognizers.
Abstract: This paper introduces a multilinear principal component analysis (MPCA) framework for tensor object feature extraction. Objects of interest in many computer vision and pattern recognition applications, such as 2D/3D images and video sequences, are naturally described as tensors or multilinear arrays. The proposed framework performs feature extraction by determining a multilinear projection that captures most of the original tensorial input variation. The solution is iterative in nature and proceeds by decomposing the original problem into a series of multiple projection subproblems. As part of this work, methods for subspace dimensionality determination are proposed and analyzed. It is shown that the MPCA framework discussed in this work supplants existing heterogeneous solutions such as the classical principal component analysis (PCA) and its 2D variant (2D PCA). Finally, a tensor object recognition system is proposed with the introduction of a discriminative tensor feature selection mechanism and a novel classification strategy, and applied to the problem of gait recognition. Results presented here indicate MPCA's utility as a feature extraction tool. It is shown that even without a fully optimized design, an MPCA-based gait recognition module achieves highly competitive performance and compares favorably to the state-of-the-art gait recognizers.

856 citations
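As a rough, hypothetical sketch of the alternating mode-wise projections at the core of MPCA (simplified to third-order data, i.e., a set of 2-D samples, and omitting the paper's dimensionality-determination and feature-selection steps):

```python
import numpy as np

def mpca_2d(samples, d1, d2, n_iter=5):
    """Toy sketch of MPCA-style alternating mode-wise projections.

    samples: array of shape (N, I1, I2).
    Returns projection matrices U1 (I1 x d1) and U2 (I2 x d2).
    """
    X = samples - samples.mean(axis=0)        # center the tensor samples
    N, I1, I2 = X.shape
    U2 = np.eye(I2)[:, :d2]                   # initialize mode-2 projection
    for _ in range(n_iter):
        # Mode-1 scatter with the mode-2 projection held fixed
        S1 = sum(x @ U2 @ U2.T @ x.T for x in X)
        _, V = np.linalg.eigh(S1)
        U1 = V[:, ::-1][:, :d1]               # top-d1 eigenvectors
        # Mode-2 scatter with the mode-1 projection held fixed
        S2 = sum(x.T @ U1 @ U1.T @ x for x in X)
        _, V = np.linalg.eigh(S2)
        U2 = V[:, ::-1][:, :d2]
    return U1, U2

def project(x, U1, U2):
    """Low-dimensional core features of one (centered) 2-D sample."""
    return U1.T @ x @ U2
```

Each pass fixes one mode's projection and solves an ordinary symmetric eigenproblem for the other, which is the sense in which the framework generalizes both classical PCA and 2D PCA.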


Journal ArticleDOI
TL;DR: A novel heuristic structure optimization methodology for radial basis probabilistic neural networks (RBPNNs) is proposed, and experimental results show that the generalization performance of the optimized RBPNN in the plant species identification task was markedly better than that of the optimized RBFNN.
Abstract: In this paper, a novel heuristic structure optimization methodology for radial basis probabilistic neural networks (RBPNNs) is proposed. First, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to select the initial hidden-layer centers of the RBPNN, and then the recursive orthogonal least square algorithm (ROLSA) combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. The proposed algorithms are evaluated through eight benchmark classification problems and two real-world application problems, a plant species identification task involving 50 plant species and a palmprint recognition task. Experimental results show that our proposed algorithm is feasible and efficient for the structure optimization of the RBPNN. The RBPNN achieves higher recognition rates and better classification efficiency than multilayer perceptron networks (MLPNs) and radial basis function neural networks (RBFNNs) in both tasks. Moreover, the experimental results show that the generalization performance of the optimized RBPNN in the plant species identification task was markedly better than that of the optimized RBFNN.

359 citations


Journal ArticleDOI
TL;DR: The Lyapunov-Krasovskii stability theory for functional differential equations and the linear matrix inequality (LMI) approach are employed; the results generalize some previously published results and are less conservative than existing ones.
Abstract: In this paper, several sufficient conditions are established for the global asymptotic stability of recurrent neural networks with multiple time-varying delays. The Lyapunov-Krasovskii stability theory for functional differential equations and the linear matrix inequality (LMI) approach are employed in our investigation. The results are shown to be generalizations of some previously published results and are less conservative than existing results. The present results are also applied to recurrent neural networks with constant time delays.

337 citations
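The Lyapunov machinery behind such LMI conditions can be illustrated in the simplest delay-free special case: for x' = Ax, global asymptotic stability is certified by a positive-definite P solving the Lyapunov equation A^T P + PA = -Q. A minimal numpy-only sketch via a Kronecker linear solve (the paper's actual conditions involve Lyapunov-Krasovskii functionals and delay terms, which are omitted here):

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P and report whether P > 0.

    Uses vec(AXB) = (B^T kron A) vec(X) with column-major vec,
    so the equation becomes a plain linear system in vec(P).
    """
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    p = np.linalg.solve(M, -Q.reshape(-1, order='F'))
    P = p.reshape(n, n, order='F')
    P = (P + P.T) / 2                          # symmetrize against round-off
    feasible = bool(np.linalg.eigvalsh(P).min() > 0)
    return P, feasible
```

A positive-definite P plays the role of the simplest LMI certificate; the delay-dependent criteria in the paper add extra matrix variables and delay-bound terms on top of this idea.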


Journal ArticleDOI
TL;DR: In this letter, the global asymptotical stability analysis problem is considered for a class of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed delays including discrete delays and distributed delays and an alternative delay-dependent stability analysis result is established based on the LMI technique.
Abstract: In this letter, the global asymptotical stability analysis problem is considered for a class of Markovian jumping stochastic Cohen-Grossberg neural networks (CGNNs) with mixed delays including discrete delays and distributed delays. An alternative delay-dependent stability analysis result is established based on the linear matrix inequality (LMI) technique, which can easily be checked by utilizing the numerically efficient Matlab LMI toolbox. Neither a system transformation nor a free-weighting matrix based on the Newton-Leibniz formula is required. Two numerical examples are included to show the effectiveness of the result.

280 citations


Journal ArticleDOI
TL;DR: The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network, and it is shown that a very significant speedup can be obtained on standard problems.
Abstract: Previous work on statistical language modeling has shown that it is possible to train a feedforward neural network to approximate probabilities over sequences of words, resulting in significant error reduction when compared to standard baseline models based on n-grams. However, training the neural network model with the maximum-likelihood criterion requires computations proportional to the number of words in the vocabulary. In this paper, we introduce adaptive importance sampling as a way to accelerate training of the model. The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network. We show that a very significant speedup can be obtained on standard problems.

239 citations
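The costly term in the maximum-likelihood gradient is the expectation of the one-hot target under the softmax over the full vocabulary (the "negative phase"). A hypothetical minimal sketch of estimating it by self-normalized importance sampling; for simplicity a fixed uniform proposal stands in for the paper's adaptive n-gram proposal:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def is_negative_phase(scores, proposal, n_samples, rng):
    """Self-normalized importance-sampling estimate of
    E_{softmax(scores)}[one_hot(w)], i.e., the expensive negative
    phase of the log-likelihood gradient over the vocabulary."""
    idx = rng.choice(len(scores), size=n_samples, p=proposal)
    # Unnormalized importance weights exp(s)/q; the max-shift cancels
    # after self-normalization, it only guards against overflow.
    w = np.exp(scores[idx] - scores.max()) / proposal[idx]
    w /= w.sum()
    est = np.zeros_like(scores)
    np.add.at(est, idx, w)                    # accumulate weight per word
    return est
```

The speedup comes from sampling far fewer words than the vocabulary size per update; the paper's contribution is keeping the proposal (an n-gram model) adapted to the network's current conditional distributions so the estimator stays low-variance.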


Journal ArticleDOI
TL;DR: This volume brings together contributions of top-level researchers in theory of computation, machine learning, and computer vision with the goal of bridging these disciplines and presenting current state-of-the-art methods for emerging applications.
Abstract: In this excellent book, the editors deal with the state-of-the-art, current best practices, and some innovative applications of nearest-neighbor methods in learning and vision. This volume brings together contributions of top-level researchers in theory of computation, machine learning, and computer vision with the goal of bridging these disciplines and presenting current state-of-the-art methods for emerging applications. All the content is well-written, highly relevant, original, and timely. The audience for this book consists of researchers, scientists, engineers, professionals, and academics, working not only in this field, but also in any field that could benefit from these powerful methods. This book can be particularly useful to researchers working on the basis set expansion networks.

229 citations


Journal ArticleDOI
TL;DR: A novel localization algorithm, named discriminant-adaptive neural network (DANN), which takes the received signal strength from the access points (APs) as inputs to infer the client position in the wireless local area network (WLAN) environment, and shows that the proposed algorithm is considerably more accurate than the other examined techniques.
Abstract: This brief paper presents a novel localization algorithm, named discriminant-adaptive neural network (DANN), which takes the received signal strength (RSS) from the access points (APs) as inputs to infer the client position in the wireless local area network (WLAN) environment. We extract the useful information into discriminative components (DCs) for network learning. The nonlinear relationship between RSS and the position is then accurately constructed by incrementally inserting the DCs and recursively updating the weightings in the network until no further improvement is required. Our localization system is developed in a real-world WLAN environment, where realistic RSS measurements are collected. We implement the traditional approaches on the same test bed, including weighted k-nearest neighbor (WKNN), maximum likelihood (ML), and multilayer perceptron (MLP), and compare the results. The experimental results indicate that the proposed algorithm is considerably more accurate than the other examined techniques. The improvement can be attributed to the fact that only the useful information is efficiently extracted for positioning while the redundant information is regarded as noise and discarded. Finally, the analysis shows that our network intelligently accomplishes learning while the inserted DCs provide sufficient information.

228 citations
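Among the baselines the paper compares against, weighted k-nearest neighbor (WKNN) is the simplest; a hypothetical sketch on synthetic fingerprints (the function names and the inverse-distance weighting scheme here are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def wknn_locate(rss_db, positions, rss_query, k=3, eps=1e-6):
    """WKNN fingerprinting baseline.

    rss_db:    (M, n_aps) RSS fingerprints collected at known spots.
    positions: (M, 2) coordinates of those spots.
    rss_query: (n_aps,) RSS measured by the client.
    Returns the inverse-distance-weighted average of the k closest
    fingerprints' positions.
    """
    d = np.linalg.norm(rss_db - rss_query, axis=1)   # distance in RSS space
    nn = np.argsort(d)[:k]                           # k nearest fingerprints
    w = 1.0 / (d[nn] + eps)                          # closer -> heavier weight
    return (w[:, None] * positions[nn]).sum(axis=0) / w.sum()
```

DANN improves on this kind of baseline by first extracting discriminative components from the raw RSS rather than treating all AP readings as equally informative.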


Journal ArticleDOI
TL;DR: In this paper, output feedback adaptive neural network (NN) controls are investigated for two classes of nonlinear discrete-time systems with unknown control directions: nonlinear pure-feedback systems and nonlinear autoregressive moving average with exogenous inputs (NARMAX).
Abstract: In this paper, output feedback adaptive neural network (NN) controls are investigated for two classes of nonlinear discrete-time systems with unknown control directions: (1) nonlinear pure-feedback systems and (2) nonlinear autoregressive moving average with exogenous inputs (NARMAX) systems. To overcome the noncausal problem, which has been known to be a major obstacle in the discrete-time control design, both systems are transformed to a predictor for output feedback control design. The implicit function theorem is used to overcome the difficulty of the nonaffine appearance of the control input. The problem of lacking a priori knowledge on the control directions is solved by using a discrete Nussbaum gain. A high-order neural network (HONN) is employed to approximate the unknown control. The closed-loop system achieves semiglobal uniformly-ultimately-bounded (SGUUB) stability and the output tracking error is made within a neighborhood around zero. Simulation results are presented to demonstrate the effectiveness of the proposed control.

224 citations


Journal ArticleDOI
TL;DR: This neural network is capable of solving a large class of quadratic programming problems and is proven to be globally stable and to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints.
Abstract: In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

219 citations


Journal ArticleDOI
TL;DR: It is shown that the design of the robust state estimator for such neural networks can be achieved by solving a linear matrix inequality (LMI), which can be done easily using standard numerical packages.
Abstract: The robust state estimation problem for a class of uncertain neural networks with time-varying delay is studied in this paper. The parameter uncertainties are assumed to be norm bounded. Based on a new bounding technique, a sufficient condition is presented to guarantee the existence of the desired state estimator for the uncertain delayed neural networks. The criterion is dependent on the size of the time-varying delay and on the size of the time derivative of the time-varying delay. It is shown that the design of the robust state estimator for such neural networks can be achieved by solving a linear matrix inequality (LMI), which can be done easily using standard numerical packages. Finally, two simulation examples are given to demonstrate the effectiveness of the developed approach.

209 citations


Journal ArticleDOI
TL;DR: A novel neural-dynamics-based approach is proposed for real-time map building and CCN of autonomous mobile robots in a completely unknown environment that is capable of planning more reasonable and shorter collision-free complete coverage paths in unknown environments.
Abstract: Complete coverage navigation (CCN) requires a special type of robot path planning, where the robots should pass every part of the workspace. CCN is an essential issue for cleaning robots and many other robotic applications. When robots work in unknown environments, map building is required for the robots to effectively cover the complete workspace. Real-time concurrent map building and complete coverage robot navigation are desirable for efficient performance in many applications. In this paper, a novel neural-dynamics-based approach is proposed for real-time map building and CCN of autonomous mobile robots in a completely unknown environment. The proposed model is compared with a triangular-cell-map-based complete coverage path planning method (Oh et al., 2004) that combines distance transform path planning, wall-following algorithm, and template-based technique. The proposed method does not need any templates, even in unknown environments. A local map composed of square or rectangular cells is created through the neural dynamics during the CCN with limited sensory information. From the measured sensory information, a map of the robot's immediate limited surroundings is dynamically built for the robot navigation. In addition, square and rectangular cell map representations are proposed for real-time map building and CCN. Comparison studies of the proposed approach with the triangular-cell-map-based complete coverage path planning approach show that the proposed method is capable of planning more reasonable and shorter collision-free complete coverage paths in unknown environments.

Journal ArticleDOI
TL;DR: Using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring the coupled delayed neural networks to be globally, robustly, exponentially synchronized in the mean square.
Abstract: This paper is concerned with the robust synchronization problem for an array of coupled stochastic discrete-time neural networks with time-varying delay. The individual neural network is subject to parameter uncertainty, stochastic disturbance, and time-varying delay, where the norm-bounded parameter uncertainties exist in both the state and weight matrices, the stochastic disturbance is in the form of a scalar Wiener process, and the time delay enters into the activation function. For the array of coupled neural networks, the constant coupling and delayed coupling are simultaneously considered. We aim to establish easy-to-verify conditions under which the addressed neural networks are synchronized. By using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring the coupled delayed neural networks to be globally, robustly, exponentially synchronized in the mean square. The LMI-based conditions obtained are dependent not only on the lower bound but also on the upper bound of the time-varying delay, and can be solved efficiently via the Matlab LMI Toolbox. Two numerical examples are given to demonstrate the usefulness of the proposed synchronization scheme.

Journal ArticleDOI
TL;DR: In this paper, adaptive neural network (NN) control is investigated for a class of nonlinear pure-feedback discrete-time systems; the implicit function theorem is exploited in the control design and an NN is employed to approximate the unknown function in the control.
Abstract: In this paper, adaptive neural network (NN) control is investigated for a class of nonlinear pure-feedback discrete-time systems. By using prediction functions of future states, the pure-feedback system is transformed into an n-step-ahead predictor, based on which state feedback NN control is synthesized. Next, by investigating the relationship between outputs and states, the system is transformed into an input-output predictor model, and then, output feedback control is constructed. To overcome the difficulty of nonaffine appearance of the control input, implicit function theorem is exploited in the control design and NN is employed to approximate the unknown function in the control. In both state feedback and output feedback control, only a single NN is used and the controller singularity is completely avoided. The closed-loop system achieves semiglobal uniform ultimate boundedness (SGUUB) stability and the output tracking error is made within a neighborhood around zero. Simulation results are presented to show the effectiveness of the proposed control approach.

Journal ArticleDOI
TL;DR: A new criterion of asymptotic stability is derived by introducing a new kind of Lyapunov-Krasovskii functional and is formulated in terms of a linear matrix inequality (LMI), which can be readily solved via standard software.
Abstract: In this brief, the problem of global asymptotic stability for delayed Hopfield neural networks (HNNs) is investigated. A new criterion of asymptotic stability is derived by introducing a new kind of Lyapunov-Krasovskii functional and is formulated in terms of a linear matrix inequality (LMI), which can be readily solved via standard software. This new criterion based on a delay fractioning approach proves to be much less conservative and the conservatism could be notably reduced by thinning the delay fractioning. An example is provided to show the effectiveness and the advantage of the proposed result.

Journal ArticleDOI
TL;DR: In this paper, a neural network is used to approximate the generalized Hamilton-Jacobi-Bellman (GHJB) solution for nonlinear discrete-time (DT) systems.
Abstract: In this paper, we consider the use of nonlinear networks towards obtaining nearly optimal solutions to the control of nonlinear discrete-time (DT) systems. The method is based on least squares successive approximation solution of the generalized Hamilton-Jacobi-Bellman (GHJB) equation which appears in optimization problems. Successive approximation using the GHJB has not previously been applied to nonlinear DT systems. The proposed recursive method solves the GHJB equation in DT on a well-defined region of attraction. The definition of GHJB, pre-Hamiltonian function, HJB equation, and method of updating the control function for the affine nonlinear DT systems under small perturbation assumption are proposed. A neural network (NN) is used to approximate the GHJB solution. It is shown that the result is a closed-loop control based on an NN that has been tuned a priori in offline mode. Numerical examples show that, for the linear DT system, the updated control laws will converge to the optimal control, and for nonlinear DT systems, the updated control laws will converge to the suboptimal control.

Journal ArticleDOI
TL;DR: A unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships, and shows that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently.
Abstract: High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.
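One of the simplest members of the generalized-LDA family handles the singularity of the within-class scatter under high dimension/low sample size by regularizing it; a hypothetical numpy sketch of that idea (not any specific algorithm from the paper):

```python
import numpy as np

def regularized_lda(X, y, reg=1e-3):
    """Regularized LDA: make the within-class scatter invertible by
    adding reg*I, then take the top generalized eigenvectors.

    X: (n_samples, n_features), y: (n_samples,) integer labels.
    Returns up to (n_classes - 1) projection directions as columns.
    """
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                     # within-class scatter
    Sb = np.zeros((d, d))                     # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # Solve (Sw + reg*I)^{-1} Sb v = lambda v and keep top directions.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:len(classes) - 1]]
```

Classical LDA is the reg → 0 limit when Sw is nonsingular; the unified framework in the paper relates this and other singularity workarounds (null-space, orthogonal, and pseudoinverse variants) within one formulation.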

Journal ArticleDOI
TL;DR: In this paper, neural networks are used along with two-player policy iterations to solve for the feedback strategies of a continuous-time zero-sum game that appears in L2-gain optimal control, suboptimal H∞ control, of nonlinear systems affine in input with the control policy having saturation constraints.
Abstract: In this paper, neural networks are used along with two-player policy iterations to solve for the feedback strategies of a continuous-time zero-sum game that appears in L2-gain optimal control, suboptimal H∞ control, of nonlinear systems affine in input with the control policy having saturation constraints. The result is a closed-form representation, on a prescribed compact set chosen a priori, of the feedback strategies and the value function that solves the associated Hamilton-Jacobi-Isaacs (HJI) equation. The closed-loop stability, L2-gain disturbance attenuation of the neural network saturated control feedback strategy, and uniform convergence results are proven. Finally, this approach is applied to the rotational/translational actuator (RTAC) nonlinear benchmark problem under actuator saturation, offering guaranteed stability and disturbance attenuation.

Journal ArticleDOI
Wei Wu1, Tianping Chen1
TL;DR: Based on the Lyapunov function method and a specific property of the Householder transform, criteria are obtained for verifying whether linearly coupled neural networks with time-varying coupling (i.e., a nonconstant coupling matrix) are globally synchronized; such criteria are important and useful both for understanding and interpreting synchronization phenomena and for designing coupling configurations.
Abstract: In this paper, global synchronization of linearly coupled neural network (NN) systems with time-varying coupling is investigated. The dynamical behavior of the uncoupled system at each node is general, which can be chaotic or otherwise; the coupling configuration is time varying, i.e., the coupling matrix is not a constant matrix. Based on the Lyapunov function method and a specific property of the Householder transform, some criteria for the global synchronization are obtained. By these criteria, one can verify whether the coupled system with time-varying coupling is globally synchronized, which is important and useful for both understanding and interpreting synchronization phenomena and designing coupling configurations. Finally, two simulations are given to demonstrate the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: Discrete-time versions of the continuous-time genetic regulatory networks with SUM regulatory functions are formulated and studied in this letter, and sufficient conditions are derived to ensure the global exponential stability of the discrete-time GRNs with delays.
Abstract: Discrete-time versions of the continuous-time genetic regulatory networks (GRNs) with SUM regulatory functions are formulated and studied in this letter. Sufficient conditions are derived to ensure the global exponential stability of the discrete-time GRNs with delays. An illustrative example is given to demonstrate the effectiveness of the obtained results.
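A toy discrete-time GRN of the kind described can be simulated directly; the concrete two-gene model, decay rates, and Hill-type regulation below are illustrative assumptions, not the letter's system:

```python
import numpy as np

def hill(p, theta=1.0, n=2):
    """Hill regulation function commonly used in SUM-logic GRN models."""
    return p**n / (theta**n + p**n)

def simulate_grn(steps=500, a=0.8, c=0.8, b=0.5):
    """Toy two-gene discrete-time GRN with SUM regulation:
    m(t+1) = a*m(t) + b*f(p(t)),  p(t+1) = c*p(t) + (1-c)*m(t),
    where each gene's mRNA m is regulated by the other gene's protein p.
    Decay rates a, c < 1 keep the trajectories bounded."""
    m = np.array([0.1, 1.0])
    p = np.array([1.0, 0.1])
    for _ in range(steps):
        m = a * m + b * hill(p[::-1])   # cross regulation (SUM logic)
        p = c * p + (1 - c) * m         # protein tracks its own mRNA
    return m, p
```

The global-exponential-stability conditions in the letter guarantee, for the delayed version of such a map, that trajectories like this one converge to a unique equilibrium regardless of the initial state.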

Journal ArticleDOI
Dongbing Gu1
TL;DR: It is shown that the distributed EM algorithm is a stochastic approximation to the standard EM algorithm and converges to a local maximum of the log-likelihood.
Abstract: This paper presents a distributed expectation-maximization (EM) algorithm over sensor networks. In the E-step of this algorithm, each sensor node independently calculates local sufficient statistics by using local observations. A consensus filter is used to diffuse local sufficient statistics to neighbors and estimate global sufficient statistics in each node. By using this consensus filter, each node can gradually diffuse its local information over the entire network and asymptotically the estimate of global sufficient statistics is obtained. In the M-step of this algorithm, each sensor node uses the estimated global sufficient statistics to update model parameters of the Gaussian mixtures, which can maximize the log-likelihood in the same way as in the standard EM algorithm. Because the consensus filter only requires that each node communicate with its neighbors, the distributed EM algorithm is scalable and robust. It is also shown that the distributed EM algorithm is a stochastic approximation to the standard EM algorithm. Thus, it converges to a local maximum of the log-likelihood. Several simulations of sensor networks are given to verify the proposed algorithm.
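The consensus-diffusion step can be sketched in isolation: each node repeatedly averages with its neighbors, and every node's estimate converges to the network-wide mean of the initial local statistics. The Laplacian-style update and step size below are illustrative assumptions, not the paper's exact consensus filter:

```python
import numpy as np

def consensus_average(local_stats, neighbors, n_iter=200, step=0.3):
    """Discrete-time consensus iterations over a fixed undirected graph.

    local_stats: one scalar statistic per node.
    neighbors:   neighbors[i] lists the nodes adjacent to node i.
    Each node moves toward its neighbors' values; for a connected graph
    and a small enough step, all nodes converge to the global average.
    """
    x = np.array(local_stats, dtype=float)
    for _ in range(n_iter):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x
```

In the distributed EM algorithm this averaging is applied componentwise to the local sufficient statistics, so each node's M-step sees (asymptotically) the same global statistics as a centralized E-step would produce.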

Journal ArticleDOI
TL;DR: A novel recurrent neural network is proposed for solving a class of convex quadratic programming (QP) problems in which the quadratic term in the objective function is the square of the Euclidean norm of the variable; this special structure leads to simple optimality conditions, based on which the neural network model is formulated.
Abstract: This paper presents a novel recurrent neural network for solving a class of convex quadratic programming (QP) problems, in which the quadratic term in the objective function is the square of the Euclidean norm of the variable. This special structure leads to a set of simple optimality conditions for the problem, based on which the neural network model is formulated. Compared with existing neural networks for general convex QP, the new model is simpler in structure and easier to implement. The new model can be regarded as an improved version of the dual neural network in the literature. Based on the new model, a simple neural network capable of solving the k-winners-take-all (k-WTA) problem is formulated. The stability and global convergence of the proposed neural network are proved rigorously and substantiated by simulation results.
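The k-WTA problem the network solves can be written as a small QP whose quadratic term is the squared Euclidean norm; a hypothetical sketch that solves this QP by bisection on the equality-constraint multiplier, rather than by simulating the network dynamics:

```python
import numpy as np

def kwta(u, k, eps=1e-4):
    """Solve  min (eps/2)||x||^2 - u.x  s.t. sum(x) = k, 0 <= x <= 1.

    The KKT conditions give x_i = clip((u_i - rho)/eps, 0, 1) for a
    multiplier rho chosen so that sum(x) = k; for small eps the solution
    approaches the indicator of the k largest inputs (the k winners).
    """
    lo, hi = u.min() - eps, u.max() + eps     # bracket the multiplier rho
    for _ in range(100):
        rho = (lo + hi) / 2
        x = np.clip((u - rho) / eps, 0.0, 1.0)
        if x.sum() > k:                        # too many winners: raise rho
            lo = rho
        else:                                  # too few (or exact): lower rho
            hi = rho
    return x
```

A recurrent network like the one in the paper reaches the same optimum by evolving its state according to the optimality conditions in real time, which is what makes it attractive for hardware implementation.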

Journal ArticleDOI
TL;DR: It is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov and its output trajectory is globally convergent to a minimum solution and there is no restriction on the initial point.
Abstract: This paper presents a novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semidefinite, it is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov and its output trajectory is globally convergent to a minimum solution. Compared with a variety of existing projection neural networks, including their extensions and modifications, for solving such nonlinearly constrained optimization problems, it is shown that the proposed neural network can solve constrained convex optimization problems and a class of constrained nonconvex optimization problems, and there is no restriction on the initial point. Simulation results show the effectiveness of the proposed neural network in solving nonlinearly constrained optimization problems.

Journal ArticleDOI
TL;DR: This paper considers the approximation of sufficiently smooth multivariable functions with a multilayer perceptron (MLP) and obtains results by considering structural properties of the Taylor polynomials of the function in question and of the MLP function.
Abstract: This paper considers the approximation of sufficiently smooth multivariable functions with a multilayer perceptron (MLP). For a given approximation order, explicit formulas for the necessary number of hidden units and its distributions to the hidden layers of the MLP are derived. These formulas depend only on the number of input variables and on the desired approximation order. The concept of approximation order encompasses Kolmogorov-Gabor polynomials or discrete Volterra series, which are widely used in static and dynamic models of nonlinear systems. The results are obtained by considering structural properties of the Taylor polynomials of the function in question and of the MLP function.

Journal ArticleDOI
TL;DR: It is shown that the designed controller renders the closed-loop system asymptotically stable with the help of the changing supply function idea; when the bounds of the uncertain interconnections are not precisely known, the proposed neural network output feedback controller keeps the closed-loop system stable in the sense of semiglobal boundedness.
Abstract: In this paper, dynamic output feedback control problem is investigated for a class of nonlinear interconnected systems with time delays. Decentralized observer independent of the time delays is first designed. Then, we employ the bounds information of uncertain interconnections to construct the decentralized output feedback controller via backstepping design method. Based on Lyapunov stability theory, we show that the designed controller can render the closed-loop system asymptotically stable with the help of the changing supply function idea. Furthermore, the corresponding decentralized control problem is considered under the case that the bounds of uncertain interconnections are not precisely known. By employing the neural network approximation theory, we construct the neural network output feedback controller with corresponding adaptive law. The resulting closed-loop system is stable in the sense of semiglobal boundedness. The observers and controllers constructed in this paper are independent of the time delays. Finally, simulations are performed to verify the effectiveness of the theoretical results obtained.

Journal ArticleDOI
TL;DR: The MLMVN is used to identify both type and parameters of the point spread function, whose precise identification is of crucial importance for the image deblurring, and the simulation results show the high efficiency of the proposed approach.
Abstract: A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific different features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of the traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both type and parameters of the point spread function, whose precise identification is of crucial importance for the image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
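The building block is the discrete multivalued neuron, whose activation maps the argument of the complex-valued weighted sum onto one of k sectors of the unit circle and outputs the corresponding k-th root of unity; a minimal sketch of that sector rule, stated here as an assumption rather than the paper's exact formulation:

```python
import cmath

def mvn_activation(z, k):
    """Discrete multivalued-neuron activation.

    z: complex weighted sum of the neuron's inputs.
    k: number of output values (sectors of the unit circle).
    Returns exp(2*pi*i*sector/k), where sector indexes which of the
    k equal angular sectors contains arg(z).
    """
    theta = cmath.phase(z) % (2 * cmath.pi)   # argument in [0, 2*pi)
    sector = int(k * theta / (2 * cmath.pi))  # sector index 0..k-1
    return cmath.exp(2j * cmath.pi * sector / k)
```

Because the output depends only on the argument of z, the learning rule can correct errors by rotating the weighted sum toward the desired sector, which is why backpropagation for the MLMVN needs no derivatives.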

Journal ArticleDOI
TL;DR: A new robust output feedback control approach for flexible-joint electrically driven (FJED) robots via the observer dynamic surface design technique and it is shown that all signals in a closed-loop adaptive system are uniformly ultimately bounded.
Abstract: In this paper, we propose a new robust output feedback control approach for flexible-joint electrically driven (FJED) robots via the observer dynamic surface design technique. The proposed method only requires position measurements of the FJED robots. To estimate the link and actuator velocity information of the FJED robots with model uncertainties, we develop an adaptive observer using self-recurrent wavelet neural networks (SRWNNs). The SRWNNs are used to approximate model uncertainties in both robot (link) dynamics and actuator dynamics, and all their weights are trained online. Based on the designed observer, the link position tracking controller using the estimated states is induced from the dynamic surface design procedure. Therefore, the proposed controller can be designed more simply than the observer backstepping controller. From the Lyapunov stability analysis, it is shown that all signals in a closed-loop adaptive system are uniformly ultimately bounded. Finally, the simulation results on a three-link FJED robot are presented to validate the good position tracking performance and robustness of the proposed control system against payload uncertainties and external disturbances.

Journal ArticleDOI
TL;DR: This paper presents a recursive algorithm for extracting classification rules from feedforward neural networks that have been trained on data sets having both discrete and continuous attributes and shows that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.
Abstract: In this paper, we present a recursive algorithm for extracting classification rules from feedforward neural networks (NNs) that have been trained on data sets having both discrete and continuous attributes. The novelty of this algorithm lies in the conditions of the extracted rules: the rule conditions involving discrete attributes are disjoint from those involving continuous attributes. The algorithm starts by first generating rules with discrete attributes only to explain the classification process of the NN. If the accuracy of a rule with only discrete attributes is not satisfactory, the algorithm refines this rule by recursively generating more rules with discrete attributes not already present in the rule condition, or by generating a hyperplane involving only the continuous attributes. We show that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.
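The recursion described above, discrete conditions first, refinement only when accuracy falls short, can be sketched as follows. This is a simplified illustration of the control flow with hypothetical helper names; the paper's algorithm additionally generates a hyperplane over the continuous attributes when discrete refinement fails, which is only stubbed out here.

```python
def rule_accuracy(rule, samples):
    """Fraction of samples covered by `rule` that share its majority label.
    A sample is a (features_dict, label) pair; a rule is a list of
    (attribute, value) equality conditions on discrete attributes."""
    covered = [label for feats, label in samples
               if all(feats[a] == v for a, v in rule)]
    if not covered:
        return 0.0
    majority = max(set(covered), key=covered.count)
    return covered.count(majority) / len(covered)

def refine(rule, samples, discrete_attrs, threshold=0.9):
    """Recursively add unused discrete conditions until the rule reaches
    `threshold` accuracy. Returns the refined rule, or None when no
    discrete refinement suffices (the paper would then fit a hyperplane
    on the continuous attributes instead)."""
    if rule_accuracy(rule, samples) >= threshold:
        return rule
    used = {a for a, _ in rule}
    for attr in discrete_attrs:
        if attr in used:
            continue
        for value in {feats[attr] for feats, _ in samples}:
            cand = refine(rule + [(attr, value)], samples,
                          discrete_attrs, threshold)
            if cand is not None:
                return cand
    return None  # fall back to a continuous-attribute split here
```

Keeping discrete and continuous conditions disjoint in this way is what makes the extracted rules easier to read than mixed-condition rules.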

Journal ArticleDOI
TL;DR: Delay-independent and delay-dependent stability conditions are proposed to ensure the asymptotic stability of static neural networks with time-varying delays, with the delay-dependent conditions further reducing the conservatism of existing results.
Abstract: This brief is concerned with the stability for static neural networks with time-varying delays. Delay-independent conditions are proposed to ensure the asymptotic stability of the neural network. The delay-independent conditions are less conservative than existing ones. To further reduce the conservatism, delay-dependent conditions are also derived, which can be applied to fast time-varying delays. Expressed in linear matrix inequalities, both delay-independent and delay-dependent stability conditions can be checked using the recently developed algorithms. Examples are provided to illustrate the effectiveness and the reduced conservatism of the proposed result.
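To make the notion of a delay-independent condition concrete, consider the scalar delayed system x'(t) = -a*x(t) + w*g(x(t - tau)) with a Lipschitz activation g. A classical textbook sufficient condition, much cruder than the paper's LMI tests, is that the self-feedback dominate the delayed gain, in which case stability holds for every delay tau >= 0:

```python
def delay_independent_stable(a, w, lipschitz=1.0):
    """Sufficient (not necessary) delay-independent stability test for the
    scalar system x'(t) = -a*x(t) + w*g(x(t - tau)) with |g'| <= lipschitz:
    stable for all tau >= 0 whenever |w| * lipschitz < a.
    A textbook toy condition, not the LMI conditions of the paper."""
    return a > 0 and abs(w) * lipschitz < a
```

The paper's delay-dependent conditions sharpen such tests by letting the admissible gain grow as the delay bound shrinks, which is exactly where the reduced conservatism comes from.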

Journal ArticleDOI
TL;DR: This paper addresses the change detection aspect leaving the design of just-in-time adaptive classification systems to a companion paper, and two completely automatic tests for detecting nonstationarity phenomena are suggested.
Abstract: The stationarity requirement for the process generating the data is a common assumption in classifiers' design. When such hypothesis does not hold, e.g., in applications affected by aging effects, drifts, deviations, and faults, classifiers must react just in time, i.e., exactly when needed, to track the process evolution. The first step in designing effective just-in-time classifiers requires detection of the temporal instant associated with the process change, and the second one needs an update of the knowledge base used by the classification system to track the process evolution. This paper addresses the change detection aspect, leaving the design of just-in-time adaptive classification systems to a companion paper. Two completely automatic tests for detecting nonstationarity phenomena are suggested, which neither require a priori information nor assumptions about the process generating the data. In particular, an effective computational intelligence-inspired test is provided to deal with multidimensional situations, a scenario where traditional change detection methods are generally not applicable or scarcely effective.
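The classic baseline the paper improves on is sequential change detection of the CUSUM family: accumulate deviations from the nominal mean and flag a change when the accumulation crosses a threshold. The sketch below is that textbook detector, not the paper's computational intelligence-inspired test; the drift and threshold values are illustrative tuning parameters.

```python
def cusum(stream, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM change detector: return the index of the first
    sample at which the cumulative positive deviation from `target_mean`
    (minus an allowance `drift`) exceeds `threshold`, or None if no
    change is flagged. Classic textbook method, parameters illustrative."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean) - drift)
        if s > threshold:
            return i  # change detected at this sample
    return None
```

Note that this detector needs the nominal mean and tuned parameters up front; the paper's contribution is precisely tests that work without such a priori information and in multidimensional settings where scalar CUSUM does not directly apply.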

Journal ArticleDOI
TL;DR: Two new theorems and the Itô calculus show that white Lévy noise will benefit subthreshold neuronal signal detection if the noise process's scaled drift velocity falls inside an interval that depends on the threshold values.
Abstract: Lévy noise can help neurons detect faint or subthreshold signals. Lévy noise extends standard Brownian noise to many types of impulsive jump-noise processes found in real and model neurons as well as in models of finance and other random phenomena. Two new theorems and the Itô calculus show that white Lévy noise will benefit subthreshold neuronal signal detection if the noise process's scaled drift velocity falls inside an interval that depends on the threshold values. These results generalize earlier “forbidden interval” theorems of neuronal “stochastic resonance” (SR) or noise-injection benefits. Global and local Lipschitz conditions imply that additive white Lévy noise can increase the mutual information or bit count of several feedback neuron models that obey a general stochastic differential equation (SDE). Simulation results show that the same noise benefits still occur for some infinite-variance stable Lévy noise processes even though the theorems themselves apply only to finite-variance Lévy noise. The Appendix proves the two Itô-theoretic lemmas that underlie the new Lévy noise-benefit theorems.
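The basic stochastic-resonance effect is easy to demonstrate with a toy threshold unit: a subthreshold signal alone never fires the unit, but added noise occasionally lifts it over threshold. The demo below uses Gaussian noise as a stand-in for finite-variance Lévy noise and tracks only the hit rate; it is a hedged illustration of the phenomenon, not the paper's SDE neuron models or mutual-information analysis (a full SR study would also track false alarms).

```python
import random

def detection_rate(signal_amp, threshold, noise_std, trials=20000, seed=1):
    """Fraction of trials in which a threshold unit fires when a
    subthreshold input (signal_amp < threshold) is present, under
    additive zero-mean Gaussian noise. Toy stochastic-resonance demo;
    Gaussian noise stands in for finite-variance Levy noise."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        if signal_amp + rng.gauss(0.0, noise_std) > threshold:
            hits += 1
    return hits / trials

# With no noise the subthreshold signal is never detected; moderate
# noise lifts it over threshold on a sizable fraction of trials.
silent = detection_rate(0.5, 1.0, 0.0)
noisy = detection_rate(0.5, 1.0, 0.4)
```

The "forbidden interval" theorems sharpen this picture: the benefit provably appears exactly when the noise's scaled drift lies outside an interval determined by the thresholds.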