Journal ArticleDOI

Stability of learning control with disturbances and uncertain initial conditions

TL;DR: In this article, the effects of state disturbances, output noise, and errors in initial conditions on a class of learning control algorithms are investigated, and bounds on the asymptotic trajectory errors for the learned input and the corresponding state and output trajectories are obtained.
Abstract: The authors investigate the effects of state disturbances, output noise, and errors in initial conditions on a class of learning control algorithms. They present a simple learning algorithm and exhibit, via a concise proof, bounds on the asymptotic trajectory errors for the learned input and the corresponding state and output trajectories. Furthermore, these bounds are continuous functions of the bounds on the initial condition errors, state disturbances, and output noise, and the bounds are zero in the absence of these disturbances.
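The headline result — asymptotic error bounds that vary continuously with the disturbance bounds and vanish when the disturbances vanish — can be illustrated with a toy simulation. The plant, learning gain, and noise model below are illustrative assumptions, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
yd = np.linspace(0.0, 1.0, n) ** 2          # desired output trajectory

def output(u, noise_bound):
    """Rollout of x+ = 0.8x + 0.2u, y = 0.5x + u, plus bounded output noise."""
    x, y = 0.0, np.empty(n)
    for i in range(n):
        y[i] = 0.5 * x + u[i]
        x = 0.8 * x + 0.2 * u[i]
    return y + noise_bound * rng.uniform(-1.0, 1.0, n)

def learned_error(noise_bound, gamma=0.8, trials=100):
    """Run P-type ILC on noisy measurements, then report the noise-free error."""
    u = np.zeros(n)
    for _ in range(trials):
        e = yd - output(u, noise_bound)     # measured (noisy) tracking error
        u = u + gamma * e                   # P-type update law
    return np.max(np.abs(yd - output(u, 0.0)))

err_clean = learned_error(0.0)              # no disturbances: error learned away
err_noisy = learned_error(0.05)             # bounded noise: error stays bounded
```

With the direct feedthrough term present, the trial-to-trial error map contracts (sup-norm factor about 0.6 per trial for these numbers), so `err_clean` falls to numerical zero, while `err_noisy` settles at a small residual that scales with the noise bound — the qualitative behavior the paper's bounds describe.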
Citations
Journal ArticleDOI
TL;DR: Though beginning its third decade of active research, the field of ILC shows no sign of slowing down and includes many results and learning algorithms beyond the scope of this survey.
Abstract: This article surveyed the major results in iterative learning control (ILC) analysis and design over the past two decades. Problems in stability, performance, learning transient behavior, and robustness were discussed along with four design techniques that have emerged as among the most popular. The content of this survey was selected to provide the reader with a broad perspective of the important ideas, potential, and limitations of ILC. Indeed, the maturing field of ILC includes many results and learning algorithms beyond the scope of this survey. Though beginning its third decade of active research, the field of ILC shows no sign of slowing down.

2,645 citations


Cites background from "Stability of learning control with ..."

  • ...[120], repeating disturbances [36], [50], [86], [120], and model...


  • ...The P-, D-, and PD-type learning functions are arguably the most widely used types of learning functions, particularly for nonlinear systems [5], [12], [39], [81], [83], [86]–[92]....


  • ...Although [82]–[86] consider continuous-time systems, parallels for many of the results contained therein can be obtained for discrete-time systems....


  • ...and robustness to initial condition variation [36], [50], [85], [86],...


  • ...Robustness to initial condition variation is discussed in [82]–[86]....


Book ChapterDOI
01 Jan 1999
TL;DR: This chapter gives an overview of the field of iterative learning control (ILC), beginning with a detailed description of the ILC technique followed by two illustrative examples that give a flavor of the nature of ILC algorithms and their performance.
Abstract: In this chapter we give an overview of the field of iterative learning control (ILC). We begin with a detailed description of the ILC technique, followed by two illustrative examples that give a flavor of the nature of ILC algorithms and their performance. This is followed by a topical classification of some of the literature of ILC and a discussion of the connection between ILC and other common control paradigms, including conventional feedback control, optimal control, adaptive control, and intelligent control. Next, we give a summary of the major algorithms, results, and applications of ILC given in the literature. This discussion also considers some emerging research topics in ILC. As an example of some of the new directions in ILC theory, we present some of our recent results that show how ILC can be used to force a desired periodic motion in an initially non-repetitive process: a gas-metal arc welding system. The chapter concludes with summary comments on the past, present, and future of ILC.

397 citations

Journal ArticleDOI
TL;DR: The objectives of this article are to introduce recent developments and advances in nonlinear ILC schemes, highlight their effectiveness and limitations, and discuss directions for further exploration of nonlinear ILC.
Abstract: In this article we review recent advances in iterative learning control (ILC) for nonlinear dynamic systems. In the research field of ILC, two categories of system nonlinearities are considered, namely global Lipschitz continuous (GLC) functions and local Lipschitz continuous (LLC) functions. ILC for GLC systems is widely studied and analysed using the contraction-mapping approach, and the focus of recent exploration has moved to application problems, though a number of theoretical issues remain open. ILC for LLC systems is currently an active area, and recent research focuses on ILC design and analysis by means of the Lyapunov approach. The objectives of this article are to introduce recent developments and advances in nonlinear ILC schemes, highlight their effectiveness and limitations, and discuss directions for further exploration of nonlinear ILC.

349 citations


Cites methods from "Stability of learning control with ..."

  • ...This robustness issue has been explored in contraction-mapping based ILC (Heinzinger et al. 1992; Chen, Wen, and Sun 1997) for GLC systems....


Journal ArticleDOI
TL;DR: This work presents a discrete-time adaptive iterative learning control scheme for systems with time-varying parametric uncertainties. The scheme can incorporate a recursive least-squares algorithm, so the learning gain can be tuned iteratively along the learning axis and pointwise along the time axis.

275 citations

Journal ArticleDOI
TL;DR: Five different initial conditions are studied, disclosing the inherent relationship between each initial condition and the corresponding learning convergence (or boundedness) property; the iterative learning control method under consideration is based on Lyapunov theory.
Abstract: Initial conditions, or initial resetting conditions, play a fundamental role in all kinds of iterative learning control methods. In this note, we study five different initial conditions and disclose the inherent relationship between each initial condition and the corresponding learning convergence (or boundedness) property. The iterative learning control method under consideration is based on Lyapunov theory, which makes it suitable for plants with time-varying parametric uncertainties and local Lipschitz nonlinearities.

216 citations


Cites methods from "Stability of learning control with ..."

  • ...The robustness of contraction based ILC has been studied [5-10] and several algorithms were proposed for ILC without i....


References
Journal ArticleDOI
TL;DR: A betterment process for the operation of a mechanical robot is proposed, in the sense that it betters the next operation of the robot by using the previous operation's data.
Abstract: This article proposes a betterment process for the operation of a mechanical robot in a sense that it betters the next operation of a robot by using the previous operation's data. The process has an iterative learning structure such that the (k + 1)th input to joint actuators consists of the kth input plus an error increment composed of the derivative difference between the kth motion trajectory and the given desired motion trajectory. The convergence of the process to the desired motion trajectory is assured under some reasonable conditions. Numerical results by computer simulation are presented to show the effectiveness of the proposed learning scheme.

3,222 citations
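The update law described in this abstract — the next input equals the current input plus a gain times the derivative of the tracking error — can be sketched as follows. The first-order plant, learning gain, and trajectory are illustrative assumptions, not Arimoto's robot example:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 1.0, dt)
yd = np.sin(np.pi * t)                       # desired motion trajectory

def simulate(u):
    """Forward-Euler rollout of the toy plant y' = -y + u, y(0) = 0."""
    y, out = 0.0, np.zeros_like(u)
    for i in range(len(u)):
        out[i] = y
        y += dt * (-y + u[i])
    return out

gamma = 0.5                                  # learning gain (assumed)
u = np.zeros_like(t)                         # 0th trial: no prior knowledge
err0 = np.max(np.abs(yd - simulate(u)))      # error before any learning
for k in range(100):
    e = yd - simulate(u)
    u = u + gamma * np.gradient(e, dt)       # D-type: gain times error derivative

final_err = np.max(np.abs(yd - simulate(u)))
```

Each trial reuses the previous trial's input plus the derivative-error increment, and the resetting condition y(0) = yd(0) holds at every trial, so the tracking error shrinks trial by trial toward zero.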

Journal ArticleDOI
01 Feb 1988
TL;DR: An iterative learning technique is applied to robot manipulators, using an inherently nonlinear analysis of the learning procedure to prove the possibility of setting up uniform upper bounds to the trajectory errors occurring at each trial.
Abstract: An iterative learning technique is applied to robot manipulators, using an inherently nonlinear analysis of the learning procedure. In particular, a 'high-gain feedback' point of view is utilized to prove the possibility of setting up uniform upper bounds on the trajectory errors occurring at each trial. The subsequent analysis of convergence shows that, apart from minor conditions, the existence of a finite (but not necessarily narrow) bound on the trajectory deviations can suffice to guarantee the zeroing of the errors after a sufficient number of trials. This in turn leaves open the possibility of obtaining exact tracking of the desired motion, even in the presence of moderate values assigned to the feedback gains.

371 citations

Proceedings ArticleDOI
J. J. Craig1
01 Jan 1984

256 citations

Journal ArticleDOI
TL;DR: It is shown that the direct transmission term of the plant plays a crucial role in the error convergence of the learning process, and a sufficient condition is given for nonlinear systems to achieve the desired output by iterative learning control.

220 citations

Proceedings ArticleDOI
07 Apr 1986
TL;DR: An algorithm that uses trajectory following errors to improve a feedforward command to a robot and uses an inverse of the robot model as part of a learning operator which processes the trajectory errors.
Abstract: We present an algorithm that uses trajectory-following errors to improve a feedforward command to a robot. This approach to robot learning is based on explicit modeling of the robot and uses an inverse of the robot model as part of a learning operator which processes the trajectory errors. Results are presented from a successful implementation of this procedure on the MIT Serial Link Direct Drive Arm. The major point of this paper is that more accurate robot models improve trajectory learning performance; learning algorithms do not reduce the need for good models in robot control.

164 citations
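The idea of an inverse plant model as the learning operator can be sketched in lifted (trial-domain) form. The impulse responses below are made up for illustration and are not the MIT arm's dynamics; with a perfect model the error would be removed in a single trial, and a merely approximate model still contracts it:

```python
import numpy as np

n = 50

def lifted(markov):
    """Lower-triangular Toeplitz matrix mapping an input trial to an output trial."""
    H = np.zeros((n, n))
    for j, h in enumerate(markov):
        H += h * np.eye(n, k=-j)
    return H

H_true = lifted([1.0, 0.5, 0.25, 0.125])      # "real" plant (assumed)
H_model = lifted([1.0, 0.4, 0.2])             # imperfect model of the plant

yd = np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # desired trajectory
u = np.zeros(n)                                # feedforward command to learn
errs = []
for _ in range(10):
    e = yd - H_true @ u                        # trajectory-following error
    errs.append(np.max(np.abs(e)))
    u = u + np.linalg.solve(H_model, e)        # learning operator: model inverse
```

The trial-to-trial error map is I − H_true·H_model⁻¹, so the closer the model is to the true plant, the smaller this map and the faster `errs` decays — consistent with the paper's point that more accurate models improve trajectory learning.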