Journal ArticleDOI

A recurrent neural network for solving Sylvester equation with time-varying coefficients

01 Sep 2002-IEEE Transactions on Neural Networks (IEEE)-Vol. 13, Iss: 5, pp 1053-1063
TL;DR: The recurrent neural network with implicit dynamics is deliberately developed in such a way that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation.
Abstract: Presents a recurrent neural network for solving the Sylvester equation with time-varying coefficient matrices. The recurrent neural network with implicit dynamics is deliberately developed in such a way that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation. Theoretical results of convergence and sensitivity analysis are presented to show the desirable properties of the recurrent neural network. Simulation results of time-varying matrix inversion and online nonlinear output regulation via pole assignment for the ball and beam system and the inverted pendulum on a cart system are also included to demonstrate the effectiveness and performance of the proposed neural network.
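The design idea in the abstract, defining a matrix-valued error and imposing dynamics that drive it to zero exponentially, can be sketched numerically. The sketch below is an illustration, not the paper's circuit: it assumes the Sylvester form A(t)X + XB(t) = C(t), sets E(t) = A(t)X + XB(t) − C(t), imposes dE/dt = −γE, and resolves the resulting implicit dynamics in dX/dt via vectorization.

```python
import numpy as np

def znn_sylvester(A, B, C, dA, dB, dC, X0, gamma=10.0, dt=1e-3, T=2.0):
    """Track the solution of A(t) X + X B(t) = C(t).

    Design: with E = A X + X B - C, impose dE/dt = -gamma * E, so E -> 0
    exponentially.  Expanding dE/dt gives the implicit dynamics
        A dX + dX B = -dA X - X dB + dC - gamma * E,
    which is linear in dX and solved here via the vec identity
        vec(A dX + dX B) = (I kron A + B^T kron I) vec(dX).
    Forward-Euler integration; A, B, C, dA, dB, dC are callables t -> matrix.
    """
    n = X0.shape[0]
    X, I = X0.astype(float).copy(), np.eye(n)
    for k in range(int(T / dt)):
        t = k * dt
        E = A(t) @ X + X @ B(t) - C(t)
        M = np.kron(I, A(t)) + np.kron(B(t).T, I)
        rhs = (-dA(t) @ X - X @ dB(t) + dC(t) - gamma * E).reshape(-1, order="F")
        X = X + dt * np.linalg.solve(M, rhs).reshape(n, n, order="F")
    return X
```

With constant coefficients the error decays like e^(−γt), matching the exponential convergence claimed in the abstract; the derivative terms keep the trajectory on the moving solution when the coefficients vary.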
Citations
Journal ArticleDOI
TL;DR: Simulation results substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
Abstract: Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
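As an illustration of the model class this abstract describes (not the authors' exact formulation), a time-varying matrix inverter can be written with the error E(t) = A(t)X − I and the imposed dynamics dE/dt = −γΦ(E), where Φ is a power-sigmoid activation. The linear solve below stands in for the analog circuit, and all parameter values are arbitrary choices:

```python
import numpy as np

def power_sigmoid(E, p=3, xi=4.0):
    """Elementwise power-sigmoid activation: E**p (odd p) for |E| <= 1,
    and a scaled sigmoid outside, continuous at |E| = 1."""
    c = (1 + np.exp(-xi)) / (1 - np.exp(-xi))
    return np.where(np.abs(E) <= 1.0,
                    E ** p,
                    c * (1 - np.exp(-xi * E)) / (1 + np.exp(-xi * E)))

def znn_inverse(A, dA, gamma=500.0, dt=1e-5, T=1.0):
    """Track A(t)^{-1}: with E = A X - I and dE/dt = -gamma * Phi(E),
    the implicit dynamics are A dX/dt = -dA X - gamma * Phi(E)."""
    n = A(0.0).shape[0]
    X = np.eye(n)                      # any nonsingular initial guess
    for k in range(int(T / dt)):
        t = k * dt
        E = A(t) @ X - np.eye(n)
        X = X + dt * np.linalg.solve(A(t), -dA(t) @ X - gamma * power_sigmoid(E))
    return X
```

Near zero the cubic branch decays more slowly than a linear activation, which is why the sketch uses a large gain and a modest tolerance; the power-sigmoid's advantage shows for large errors, where the sigmoid branch keeps the activation bounded.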

466 citations


Cites methods from "A recurrent neural network for solv..."

  • ...Moreover, the gradient-based method also requires much faster convergence in comparison with the time scale of time-varying matrices or imposes very stringent restrictions on design parameters [22]....

  • ...Otherwise, the matrix could be singular....

Journal ArticleDOI
TL;DR: A model-free control and a control with a restricted model are presented for finite-dimensional complex systems; they may be viewed as a contribution to "intelligent" PID controllers, whose tuning becomes quite straightforward even with highly nonlinear and/or time-varying systems.

268 citations

Posted Content
TL;DR: In this paper, a model-free control and a control with a restricted model for finite-dimensional complex systems are presented, which can be viewed as a contribution to intelligent PID controllers.
Abstract: We are introducing a model-free control and a control with a restricted model for finite-dimensional complex systems. This control design may be viewed as a contribution to "intelligent" PID controllers, the tuning of which becomes quite straightforward, even with highly nonlinear and/or time-varying systems. Our main tool is a newly developed numerical differentiation. Differential algebra provides the theoretical framework. Our approach is validated by several numerical experiments.

206 citations

Journal ArticleDOI
TL;DR: Based on a new evolution formula, a novel finite-time recurrent neural network is proposed and studied for solving a nonstationary Lyapunov equation and the FTZNN model is successfully applied to online tracking control of a wheeled mobile manipulator.
Abstract: The Lyapunov equation is widely employed in the engineering field to analyze the stability of dynamic systems. In this paper, based on a new evolution formula, a novel finite-time recurrent neural network (termed the finite-time Zhang neural network, FTZNN) is proposed and studied for solving a nonstationary Lyapunov equation. In comparison with the original Zhang neural network (ZNN) model for a nonstationary Lyapunov equation, the proposed FTZNN model achieves remarkably improved convergence, which is accelerated to finite time. In addition, by solving a differential inequality, the upper bound of the FTZNN model's convergence time is computed theoretically and analytically. Simulations are conducted and compared to validate the superiority of the FTZNN model over the original ZNN model for solving the nonstationary Lyapunov equation. Finally, the FTZNN model is successfully applied to online tracking control of a wheeled mobile manipulator.

197 citations


Additional excerpts

  • ...In comparison with the numerical algorithms, RNNs have the potential hardware-implementation ability and the high-speed parallel-distributed processing property in large-scale online applications [19]–[22]....

Journal ArticleDOI
TL;DR: The global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions, and a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing.
Abstract: The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, their computational burden increases sharply as the sampling period decreases, and they cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks cast light on extending the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and its global convergence is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
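The finite-time property attributed to the sign-bi-power activation can be seen already on a real scalar error (a simplification of the complex-valued setting in this abstract; the gain and step size below are arbitrary): with Φ(e) = ½sgn(e)(|e|^r + |e|^(1/r)), 0 < r < 1, the |e|^r term keeps the decay rate of order |e|^r near zero, so the error reaches zero in finite time instead of merely decaying exponentially.

```python
import numpy as np

def sign_bi_power(e, r=0.5):
    """Sign-bi-power activation: 0.5*sgn(e)*(|e|**r + |e|**(1/r)), 0 < r < 1."""
    return 0.5 * np.sign(e) * (np.abs(e) ** r + np.abs(e) ** (1.0 / r))

# Scalar demo of e' = -gamma * phi(e): forward-Euler integration drives the
# error to (numerical) zero well before the time cap, whereas the linear
# dynamics e' = -gamma*e would only reach about e0*exp(-gamma*t) by then.
gamma, dt = 5.0, 1e-4
e, t = 2.0, 0.0
while abs(e) > 1e-6 and t < 2.0:
    e -= dt * gamma * sign_bi_power(e)
    t += dt
```

The loop terminates on the error tolerance, not the time cap, which is the discrete-time footprint of the finite convergence-time bound derived in the paper.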

192 citations


Cites background from "A recurrent neural network for solv..."

  • ...Interested readers are referred to [21], [32], and [36] for more about the neural architecture of ZNN....

References
Book
Roger A. Horn
12 Jul 2010
TL;DR: The book treats the field of values, stable matrices and inertia, singular value inequalities, matrix equations and Kronecker products, Hadamard products, and matrices and functions.
Abstract: 1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.

7,013 citations

Book ChapterDOI
TL;DR: In this paper, the problem of controlling a fixed nonlinear plant in order to have its output track (or reject) a family of reference (or disturbance) signals produced by some external generator is discussed.
Abstract: The problem of controlling a fixed nonlinear plant in order to have its output track (or reject) a family of reference (or disturbance) signals produced by some external generator is discussed. It is shown that, under standard assumptions, this problem is solvable if and only if a certain nonlinear partial differential equation is solvable. Once a solution of this equation is available, a feedback law which solves the problem can easily be constructed. The theory developed incorporates previously published results established for linear systems.

1,639 citations


"A recurrent neural network for solv..." refers background in this paper

  • ...As shown in [16] and [17], the key condition for the solution of this problem is the existence of a zero-error manifold for the plant, which can be rephrased as the existence of sufficiently smooth functions and such that , and...

Journal ArticleDOI
TL;DR: In this paper, an approximate input-output linearization of nonlinear systems which fail to have a well defined relative degree is studied, and a method for constructing approximate systems that are input output linearizable is provided.
Abstract: Approximate input-output linearization of nonlinear systems which fail to have a well-defined relative degree is studied. For such systems, a method for constructing approximate systems that are input-output linearizable is provided. The analysis presented is motivated through its application to a common undergraduate control laboratory experiment, the ball and beam, where it is shown to be more effective for trajectory tracking than the standard Jacobian linearization.

669 citations


Additional excerpts

  • ...The motion equation is repeated below [19]...

Journal ArticleDOI
TL;DR: In this article, it was shown that the pole assignment problem can be reduced to solving the linear matrix equations AX − XH = −BG and FX = G successively for X and then F, for almost any choice of G.

202 citations

Journal ArticleDOI
TL;DR: Various circuit architectures of simple neuron-like analog processors are considered for online solving of a system of linear equations with real constant and/or time-variable coefficients and can be used for solving linear and quadratic programming problems.
Abstract: Various circuit architectures of simple neuron-like analog processors are considered for online solving of a system of linear equations with real constant and/or time-variable coefficients. The proposed circuit structures can be used, after slight modifications, in related problems, namely, inversion and pseudo-inversion of matrices and for solving linear and quadratic programming problems. Various ordinary differential equation formulation schemes (generally nonlinear) and corresponding circuit architectures are investigated to find which are best suited for VLSI implementations. Special emphasis is given to ill-conditioned problems. The properties and performance of the proposed circuit structures are investigated by extensive computer simulations.
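In the simplest case, such neuron-like analog processors are gradient flows: to solve Ax = b, the circuit integrates dx/dt = −μAᵀ(Ax − b), which descends the energy ½‖Ax − b‖². A forward-Euler sketch (gain, step size, and test matrix are illustrative choices, not from the paper):

```python
import numpy as np

def gradient_flow_solver(A, b, mu=50.0, dt=1e-4, T=1.0):
    """Simulate the analog gradient circuit dx/dt = -mu * A^T (A x - b).

    For full-rank A the equilibrium is the (least-squares) solution of
    A x = b; convergence is exponential at rate mu * lambda_min(A^T A).
    """
    x = np.zeros(A.shape[1])
    for _ in range(int(T / dt)):
        x = x - dt * mu * (A.T @ (A @ x - b))
    return x
```

The same flow covers the pseudo-inversion case mentioned in the abstract, since for inconsistent systems it converges to the least-squares solution; for the Euler discretization to be stable, mu * dt * lambda_max(AᵀA) must stay below 2.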

188 citations