Journal ArticleDOI
Iterative learning control for discrete-time systems with exponential rate of convergence
N. Amann, David H. Owens, Eric Rogers +2 more
- Vol. 143, Iss: 2, pp 217-224
TLDR
An algorithm for iterative learning control is proposed based on an optimisation principle used by other authors to derive gradient-type algorithms. It is a descent algorithm whose potential benefits include a realisation in terms of Riccati feedback and feedforward components.
Abstract:
An algorithm for iterative learning control is proposed based on an optimisation principle used by other authors to derive gradient-type algorithms. The new algorithm is a descent algorithm and has potential benefits which include a realisation in terms of Riccati feedback and feedforward components. This realisation also has the advantage of implicitly ensuring automatic step-size selection, and hence guarantees convergence without the need for empirical choice of parameters. The algorithm achieves a geometric rate of convergence for invertible plants. One important feature of the proposed algorithm is the dependence of the speed of convergence on the weight parameters appearing in the norms of the signals chosen for the optimisation problem.
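The optimisation principle the abstract describes can be illustrated in lifted (trial-domain) form: minimising a weighted quadratic cost in the tracking error and the change of input between trials yields a descent update whose step size is set automatically by the weights. The sketch below is a minimal illustration of that norm-optimal update for an assumed first-order example plant, not the authors' Riccati feedback/feedforward realisation; the plant parameters, trial length, and weight matrices `Q` and `R` are all illustrative choices.

```python
import numpy as np

# Lifted representation of a SISO discrete-time plant over a finite
# trial of N samples: y = G u, where G collects the impulse-response
# coefficients of an illustrative first-order plant with feedthrough.
N = 50
a, b = 0.8, 1.0
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = b * a ** (i - j)      # impulse-response coefficient

r = np.sin(np.linspace(0, 2 * np.pi, N))  # reference trajectory
Q = np.eye(N)                             # weight on tracking error
R = 0.1 * np.eye(N)                       # weight on input change

# Minimising e'Qe + (u_{k+1}-u_k)'R(u_{k+1}-u_k) per trial gives the
# descent update u_{k+1} = u_k + (R + G'QG)^{-1} G'Q e_k; the weights
# Q and R fix the step size, so no tuning along the trials is needed.
L = np.linalg.solve(R + G.T @ Q @ G, G.T @ Q)

u = np.zeros(N)
errs = []
for k in range(20):
    e = r - G @ u                 # tracking error on trial k
    errs.append(np.linalg.norm(e))
    u = u + L @ e                 # learning update for trial k+1

# For an invertible G the error norm contracts by a roughly constant
# factor each trial, i.e. convergence is geometric; decreasing R
# (cheaper input changes) speeds it up.
ratios = [errs[k + 1] / errs[k] for k in range(5)]
```

Running this, the error norm shrinks trial by trial, reflecting the geometric convergence for invertible plants and the dependence of the convergence speed on the weight matrices noted in the abstract.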
Citations
Proceedings ArticleDOI
An iterative predictive learning control approach with application to train trajectory tracking
Heqing Sun, Zhongsheng Hou +1 more
TL;DR: Rigorous theoretical analysis confirms that the proposed approach guarantees the asymptotic convergence of train speed and position to the desired profiles along the iteration axis.
A New Method Applied for the Determination of Relative Weight Ratios Under the TensorFlow Platform When Estimating Coseismic Slip Distribution
TL;DR: In this paper, the authors propose a new method for determining the relative weights of multiple observations in jointly inverting slip distributions, which regards the observations and the relative weight ratios as the training data sets and training parameters, respectively; the constructed loss function is optimised by the gradient descent method under the TensorFlow platform (GDED).
Posted ContentDOI
HECIL: A Hybrid Error Correction Algorithm for Long Reads with Iterative Learning
TL;DR: The proposed HECIL (Hybrid Error Correction with Iterative Learning) is a hybrid error correction framework that determines a correction policy for erroneous long reads based on optimal combinations of decision weights obtained from short-read alignments; it outperforms state-of-the-art error correction algorithms on an overwhelming majority of evaluation metrics.
Journal ArticleDOI
Data-driven Based Integrated Learning Controller Design for Batch Processes
Li Jia, Luming Cao, Min-Sen Chiu +2 more
TL;DR: A novel integrated learning control system that is very effective in eliminating modeling error and uncertainty; applied to a benchmark batch process, it shows better stability and robustness than traditional iterative learning control.