Performance analysis of the generalised projection identification for time-varying systems
References
A New Approach to Stability Analysis and Stabilization of Discrete-Time T-S Fuzzy Time-Varying Delay Systems
A Novel Approach to Filter Design for T–S Fuzzy Discrete-Time Systems With Time-Varying Delay
Brief paper: Robust mixed H2/H∞ control of networked control systems with random time delays in both forward and backward communication links
Performance analysis of multi-innovation gradient type identification methods
Model Approximation for Discrete-Time State-Delay Systems in the T–S Fuzzy Framework
Frequently Asked Questions (17)
Q2. What future works have the authors mentioned in the paper "Performance analysis of the generalized projection identification for time-varying systems" ?
The proposed method in the paper can be extended to study identification problems of other time-varying or time-invariant scalar or multivariable systems [?, 28–31].
Q3. What is the disadvantage of the projection algorithm?
Although the projection algorithm has low computational complexity, it is sensitive to noise; the stochastic gradient (SG) algorithm, in turn, is not capable of tracking time-varying parameters [23].
Q4. What is the disadvantage of the forgetting factor recursive least squares algorithm?
Although the forgetting factor recursive least squares algorithm can estimate the time-varying parameter vector ϑ(t), its computational load is heavy due to computing the covariance matrix [15,23].
Q5. What is the PEE upper bound for deterministic time-varying systems?
For deterministic time-varying systems, i.e., when the observation noise v(t) ≡ 0, the PEE upper bound [i.e., the expression on the right-hand side of (3)] remains bounded as λ → 0.
Q6. what is the recursive least squares algorithm?
If the authors take R(t) := P⁻¹(t) and

P⁻¹(t) = Σ_{i=0}^{q−1} ψ(t − i)ψᵀ(t − i)  (10)
       = P⁻¹(t − 1) + ψ(t)ψᵀ(t) − ψ(t − q)ψᵀ(t − q),  (11)

then Equations (4) and (11) form the finite data window recursive least squares (FDW-RLS) algorithm:

ϑ̂(t) = ϑ̂(t − 1) + P(t)ψ(t)[y(t) − ψᵀ(t)ϑ̂(t − 1)],  (12)
P⁻¹(t) = P⁻¹(t − 1) + ψ(t)ψᵀ(t) − ψ(t − q)ψᵀ(t − q),  P(0) = p₀Iₙ,  (13)

where q is the length of the data window.
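The FDW-RLS recursion (12)–(13) can be sketched numerically as follows. This is a minimal illustration on synthetic data, not the example system from the paper; the dimensions, the data-generating model, and the direct matrix inverse (used for clarity at small n) are all assumptions.

```python
import numpy as np

def fdw_rls(psi_seq, y_seq, q, p0=1e6):
    """Finite data window RLS sketch:
    theta(t) = theta(t-1) + P(t) psi(t) [y(t) - psi(t)' theta(t-1)],
    P^{-1}(t) = P^{-1}(t-1) + psi(t) psi(t)' - psi(t-q) psi(t-q)'."""
    n = psi_seq.shape[1]
    theta = np.zeros(n)
    P_inv = np.eye(n) / p0              # P(0) = p0 * I_n
    for t in range(len(y_seq)):
        psi = psi_seq[t]
        P_inv = P_inv + np.outer(psi, psi)
        if t >= q:                      # drop the datum leaving the window
            old = psi_seq[t - q]
            P_inv = P_inv - np.outer(old, old)
        P = np.linalg.inv(P_inv)        # small n, direct inverse for clarity
        theta = theta + P @ psi * (y_seq[t] - psi @ theta)
    return theta

# Usage: recover an illustrative theta = [1.5, -0.8] from noiseless data.
rng = np.random.default_rng(0)
psi_seq = rng.standard_normal((500, 2))
true_theta = np.array([1.5, -0.8])
y_seq = psi_seq @ true_theta
theta_hat = fdw_rls(psi_seq, y_seq, q=30)
print(np.round(theta_hat, 3))
```

With noiseless data the estimate settles essentially at the true parameters; with noise, a longer window q averages over more data at the cost of slower tracking.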
Q7. What is the noise-to-signal ratio in the example?
In the simulation, the input {u(t)} is taken as an uncorrelated stochastic sequence with zero mean and unit variance, and {v(t)} as a white noise sequence with zero mean and variance σv² = 0.20²; the resulting noise-to-signal ratio is δns = 24.51%.
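The noise-to-signal ratio is conventionally the ratio of the noise standard deviation to that of the noise-free output. A minimal sketch of that computation, on synthetic signals rather than the paper's example system (so the resulting percentage differs from 24.51%):

```python
import numpy as np

# delta_ns = sqrt(var(noise) / var(noise-free output)) * 100%.
# Illustrative signals: a unit-variance "output" and sigma_v = 0.20 noise.
rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)            # noise-free output, unit variance
v = 0.20 * rng.standard_normal(100_000)     # white noise, sigma_v = 0.20
delta_ns = np.sqrt(np.var(v) / np.var(x)) * 100.0
print(f"{delta_ns:.2f}%")                   # close to 20% for these variances
```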
Q8. What is the simplest explanation of the PEE upper bound?
For a practical identification problem, the first task is to collect the input-output data {u(t), y(t)} and use them to construct the information vector ψ(t). A longer data window permits better performance of the GPJ algorithm for slowly time-varying systems.
Q9. What is the value of r(t) in the FFSG algorithm?
If the information vector ψ(t) has lower and upper bounds with 0 < α ≤ ‖ψ(t)‖² ≤ β, then r(t) in the FFSG algorithm (15)–(16) satisfies corresponding lower and upper bounds in terms of α and β.
Q10. What is the generalized projection algorithm?
The generalized projection algorithm has better stability performance for a larger data window length, and its performance is superior to that of the projection algorithm and the stochastic gradient algorithm.
Q11. What is the main purpose of this paper?
This paper considers the identification algorithm and its performance analysis for time-varying systems [14,15],

A(t, z)y(t) = B(t, z)u(t) + v(t),  (1)

where {u(t)} and {y(t)} are the input and output sequences of the system, respectively, {v(t)} is a stochastic noise sequence with zero mean, z⁻¹ is the unit backward shift operator: z⁻¹y(t) = y(t − 1), and A(t, z) and B(t, z) are time-varying coefficient polynomials in z⁻¹:

A(t, z) := 1 + a1(t)z⁻¹ + a2(t)z⁻² + ··· + a_na(t)z^(−na),
B(t, z) := b1(t)z⁻¹ + b2(t)z⁻² + ··· + b_nb(t)z^(−nb).

∗ Corresponding author. E-mail: fding@jiangnan.edu.cn (F. Ding)
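Data from model (1) can be generated with a short simulation. The sketch below takes na = nb = 1, so y(t) = −a1(t)y(t − 1) + b1(t)u(t − 1) + v(t); the slowly varying coefficient trajectories a1(t) and b1(t) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Simulate A(t,z) y(t) = B(t,z) u(t) + v(t) with na = nb = 1:
# y(t) = -a1(t) y(t-1) + b1(t) u(t-1) + v(t).
rng = np.random.default_rng(2)
T = 300
u = rng.standard_normal(T)                 # zero-mean, unit-variance input
v = 0.20 * rng.standard_normal(T)          # white noise, sigma_v = 0.20
# Illustrative slowly time-varying coefficients (|a1| < 1 keeps the model stable).
a1 = -0.5 + 0.1 * np.sin(2 * np.pi * np.arange(T) / T)
b1 = 1.0 + 0.1 * np.cos(2 * np.pi * np.arange(T) / T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = -a1[t] * y[t - 1] + b1[t] * u[t - 1] + v[t]
```

Here the information vector is ψ(t) = [−y(t − 1), u(t − 1)]ᵀ and the parameter vector is ϑ(t) = [a1(t), b1(t)]ᵀ.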
Q12. what is the parameter estimation error vector?
If the conditions in Lemma 1 hold, then the parameter estimation error vector ϑ̂(t) − ϑ(t) given by the generalized projection algorithm is mean-square bounded, i.e.,

E[‖ϑ̂(t) − ϑ(t)‖²] ≤ ρ^⌊t/s⌋ E[‖ϑ̂(0) − ϑ(0)‖²] + k1(q + s)σv²/(q − s + 1)² + k2(q + s)σw²,

where k1 = 2ns³(s + 1)²β³/α⁴ and k2 = 2n²s²(s + 1)²β²/α².
Q13. What is the corollary of the following study?
Let q = t in (22), i.e., r(t) = r(t − 1) + ‖ψ(t)‖². Then

lim_{t→∞} E[‖ϑ̂(t) − ϑ‖²] ≤ lim_{t→∞} k1(t + s)σv²/(t − s + 1)² = 0.

Thus, for time-invariant stochastic systems, the parameter estimates given by the stochastic gradient algorithm converge to the true parameters (see Theorem 2).
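The time-invariant case of this corollary can be illustrated with the stochastic gradient recursion ϑ̂(t) = ϑ̂(t − 1) + ψ(t)/r(t) [y(t) − ψᵀ(t)ϑ̂(t − 1)], using r(t) = r(t − 1) + ‖ψ(t)‖². This is a sketch on synthetic data; the regressor model, noise level, and true parameters are illustrative assumptions.

```python
import numpy as np

# Stochastic gradient (SG) sketch for a time-invariant system,
# with the accumulating gain r(t) = r(t-1) + ||psi(t)||^2 from the corollary.
rng = np.random.default_rng(3)
true_theta = np.array([1.5, -0.8])
T = 20_000
theta = np.zeros(2)
r = 1.0
for t in range(T):
    psi = rng.standard_normal(2)                     # information vector
    y = psi @ true_theta + 0.20 * rng.standard_normal()
    r += psi @ psi                                   # r(t) = r(t-1) + ||psi||^2
    theta = theta + psi / r * (y - psi @ theta)      # SG parameter update
print(np.round(theta, 2))
```

Because the gain 1/r(t) decays like 1/t, convergence is slow but the estimation error does vanish in the mean-square sense, consistent with the limit above.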
Q14. What is the main argument for the paper?
On the basis of the work in [24], this paper combines the advantages of the projection algorithm and the SG algorithm to present a generalized projection identification algorithm for time-varying systems, and studies the convergence performance of the proposed algorithm by using the stochastic process theory.
Q15. What is the parameter estimation error vector?
From Theorem 1, the authors can see that the identification algorithms encounter difficulties for systems with fast-changing parameters, because such parameters have a large variance σw² and the parameter estimation error upper bound becomes large.
Q16. What is the simplest way to estimate the parameters of the SG algorithm?
When the generalized projection algorithm starts operating, the authors choose a smaller data window length, and then a larger data window length as time passes.
Q17. what is the eigenvalue of the matrix?
Let ζ0 ∈ ℝⁿ be the unit eigenvector corresponding to the maximum eigenvalue ρ(t) of the matrix Φᵀ(t + s, t)Φ(t + s, t), i.e., Φᵀ(t + s, t)Φ(t + s, t)ζ0 = ρ(t)ζ0.