The optimal convergence factor of the gradient based iterative algorithm for linear matrix equations
Citations
The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester-transpose matrix equation
Gradient-based iterative algorithm for solving the generalized coupled Sylvester-transpose and conjugate matrix equations over reflexive (anti-reflexive) matrices
Gradient Based Iterative Algorithm to Solve General Coupled Discrete-Time Periodic Matrix Equations over Generalized Reflexive Matrices
Alternating direction method for generalized Sylvester matrix equation AXB + CYD = E
Gradient-based iterative algorithms for generalized coupled Sylvester-conjugate matrix equations
References
Model Reduction for Control System Design
A recurrent neural network for solving Sylvester equation with time-varying coefficients
Gradient based iterative algorithms for solving a class of matrix equations
Iterative least-squares solutions of coupled Sylvester matrix equations
On Iterative Solutions of General Coupled Matrix Equations
Frequently Asked Questions (9)
Q2. Why is the convergent rate based on the hierarchical identification principle?
Because the convergence rate depends on the convergence factor µ: the larger the convergence factor µ, the faster the algorithm converges.
Q3. What is the main purpose of this paper?
Numerical methods for solving matrix equations are of interest because they play an important role in various fields, such as neural networks [18], model reduction [14], and image processing [1].
Q4. What is the main conclusion of the paper?
In [17], Xie et al. presented an efficient gradient-based iterative algorithm for solving a class of matrix equations by applying the hierarchical identification principle ([2]-[4]), and the convergence properties of the method were investigated.
Q5. What is the main topic of the paper?
Iterative approaches for solving matrix equations and recursive identification for parameter estimation have received much attention; see, e.g., [6]-[10], [11], [15]-[16], [20]-[22].
Q6. what is the optimal factor for a convergent matrix?
If the matrix Φ is negative definite or indefinite, then Algorithm 1 will diverge for some initial values; otherwise, if 0 < µ < 2/((p+q)λmax(Φ)), it will converge, and in this case the optimal convergence factor is µ_optimal = 2/((p+q)(λmin(Φ) + λmax(Φ))).
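The role of the convergence factor can be sketched numerically. The snippet below is a minimal illustration, not the paper's Algorithm 1: it applies a gradient iteration to a Sylvester-type equation A X + X B = F (the matrices A, B, X_true are made-up examples), and picks the step size as 2/(λmin + λmax) of the normal-equations matrix S^T S, mirroring the form of µ_optimal above (without the (p+q) factor specific to the paper's hierarchical decomposition).

```python
# Illustrative sketch: gradient iteration for A X + X B = F with an
# optimal-style step size 2 / (lambda_min + lambda_max) of S^T S.
# A, B, X_true are made-up; this is not the paper's Algorithm 1.
import numpy as np

def gradient_iteration(A, B, F, mu, iters=500):
    """X <- X + mu * (A^T R + R B^T), with residual R = F - A X - X B."""
    X = np.zeros_like(F)
    for _ in range(iters):
        R = F - A @ X - X @ B
        X = X + mu * (A.T @ R + R @ B.T)
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [1.0, 3.0]])
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
F = A @ X_true + X_true @ B

# Vectorized coefficient matrix: col[A X + X B] = (I kron A + B^T kron I) col[X].
S = np.kron(np.eye(2), A) + np.kron(B.T, np.eye(2))
eigs = np.linalg.eigvalsh(S.T @ S)
mu_opt = 2.0 / (eigs.min() + eigs.max())  # fastest rate for this least-squares iteration

X = gradient_iteration(A, B, F, mu_opt)
print(np.allclose(X, X_true, atol=1e-8))
```

Taking µ above the upper bound 2/λmax(S^T S) makes the iteration diverge, in line with the convergence condition quoted above.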
Q7. what is the mn mn square matrix?
For two matrices M and N, M ⊗ N is their Kronecker product; for an m × n matrix X = [x1, x2, ..., xn] ∈ R^(m×n), xk ∈ R^m, col[X] is the mn-dimensional vector formed by stacking the columns of X, i.e., col[X] = [x1^T, x2^T, ..., xn^T]^T ∈ R^(mn). Referring to Al Zhour and Kilicman's work [19], the mn × mn square matrix is defined by P_mn = Σ_{i=1}^{m} Σ_{j=1}^{n} E_ij ⊗ E_ij^T, (2) where E_ij = e_i e_j^T is called an elementary matrix of order m × n, and e_i (e_j) is a column vector of order m × 1 (n × 1) with a one in the ith (jth) position and zeros elsewhere.
Q8. What is the convergent rate for X(k)?
According to Fig. 1 and Fig. 2, it is clear that the larger the convergence factor µ, the faster the convergence rate; when the convergence factor µ is taken to be 0.3961, the convergence rate is the fastest.
Q9. What is the simplest solution to the linear matrix equations?
In this case, the unique solution is given by col[X] = (S^T S)^(-1) S^T col[F], and the corresponding homogeneous matrix equation in (1) with F = 0 has the unique solution X = 0.
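This closed-form solution is easy to sketch, assuming S denotes the coefficient matrix of the vectorized system S col[X] = col[F] (equation (1) itself is not shown on this page). For illustration we take the Sylvester form A X + X B = F, for which S = I ⊗ A + B^T ⊗ I; the matrices below are made-up examples.

```python
# Sketch of col[X] = (S^T S)^{-1} S^T col[F] for an illustrative
# Sylvester equation A X + X B = F (A, B, X_true are made-up).
import numpy as np

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 2.0]])
X_true = np.array([[1.0, 2.0], [-1.0, 0.5]])
F = A @ X_true + X_true @ B

# Vectorize: col[A X + X B] = (I kron A + B^T kron I) col[X] = S col[X].
S = np.kron(np.eye(2), A) + np.kron(B.T, np.eye(2))
colF = F.reshape(-1, order="F")            # col[F]: stacked columns

colX = np.linalg.solve(S.T @ S, S.T @ colF)  # (S^T S)^{-1} S^T col[F]
X = colX.reshape(2, 2, order="F")            # unstack back into 2 x 2
print(np.allclose(X, X_true))
```

In practice one would use `np.linalg.lstsq` rather than forming S^T S explicitly; the normal-equations form is kept here only to match the formula in the text.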