Journal ArticleDOI

Combined ILC and Disturbance Observer for the Rejection of Near-Repetitive Disturbances, With Application to Excavation

03 Mar 2015-IEEE Transactions on Control Systems and Technology (IEEE)-Vol. 23, Iss: 5, pp 1754-1769
TL;DR: Proposes a new control structure for tasks where explicit disturbance compensation is not only critical for overcoming poor feedback performance but also challenging due to the complexity and nonrepetitive nature of the interaction between the plant and the environment.
Abstract: This paper proposes a new control structure for tasks where explicit disturbance compensation is not only critical for overcoming poor feedback performance but is also challenging due to the complexity and nonrepetitive nature of the interaction between the plant and the environment. The approach proposed uses a particular form of iterative learning control (ILC) to estimate the previous disturbances, which are used as a preview of the disturbance in the next iteration. A disturbance observer is used to compensate for the difference between the ILC prediction and the true disturbance. The controller is evaluated and compared with a proportional controller, with ILC, and with an observer-based controller in extensive field trials using an automated excavator.
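
To make the structure described above concrete, here is a minimal simulation sketch in Python. It is not the authors' implementation: the scalar plant, the gains, and the first-order observer filter are illustrative assumptions; only the overall idea is taken from the abstract, namely that the disturbance estimated during one iteration is replayed as a preview in the next, while an observer compensates the residual difference within the iteration.

    import numpy as np

    # Scalar plant x[t+1] = a*x[t] + b*(u[t] + d[t]) with a near-repetitive
    # disturbance d. Plant parameters and gains are illustrative assumptions.
    a, b = 0.9, 0.5
    T, iterations = 200, 5
    kp = 2.0          # simple proportional feedback
    L_ilc = 0.8       # ILC learning gain (iteration-domain blending)
    beta = 0.3        # first-order filter coefficient of the observer

    def disturbance(it, t):
        # Same shape every iteration, drifting slowly: "near-repetitive".
        return (1.0 + 0.05 * it) * np.sin(2.0 * np.pi * t / T)

    d_prev = np.zeros(T)  # ILC memory: disturbance preview from the last pass
    for it in range(iterations):
        x, d_hat = 0.0, 0.0
        d_store, err = np.zeros(T), np.zeros(T)
        for t in range(T):
            d = disturbance(it, t)
            # Feedback + ILC preview of last iteration + observer correction.
            u = -kp * x - d_prev[t] - d_hat
            x_next = a * x + b * (u + d)
            # Infer the total input disturbance from the model mismatch, then
            # low-pass filter the part not already covered by the preview.
            d_total = (x_next - a * x) / b - u
            d_hat = (1.0 - beta) * d_hat + beta * (d_total - d_prev[t])
            d_store[t], err[t] = d_total, x
            x = x_next
        # ILC update: blend the new estimate into the preview for the next pass.
        d_prev = (1.0 - L_ilc) * d_prev + L_ilc * d_store
        print(f"iteration {it}: RMS regulation error = {np.sqrt(np.mean(err**2)):.4f}")

In this toy setting the observer only has to track the small iteration-to-iteration drift, which mirrors the division of labor the abstract describes.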
Citations
Journal ArticleDOI
TL;DR: The disturbance observer (DOB) has been one of the most widely used robust control tools since it was proposed by Ohnishi in 1983; this paper surveys the major results on DOB-based robust control over the last 35 years.
Abstract: The disturbance observer (DOB) has been one of the most widely used robust control tools since it was proposed by Ohnishi in 1983. This paper introduces the origins of the DOB and presents a survey of the major results on DOB-based robust control in the last 35 years. Furthermore, it explains DOB analysis and synthesis techniques for linear and nonlinear systems using a unified framework. In the final section, the paper presents concluding remarks on DOB-based robust control and its engineering applications.
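
For readers new to the topic, the classical DOB structure that this survey covers can be summarized as follows (standard textbook form, not a formulation specific to this survey): the lumped disturbance is estimated by filtering the difference between the input reconstructed from the inverted nominal model and the applied input, and the estimate is subtracted from the control signal.

    \[
    \hat{d} \;=\; Q(s)\left(P_n^{-1}(s)\,y - u\right), \qquad u \;=\; u_c - \hat{d},
    \]

where \(P_n(s)\) is the nominal plant model, \(Q(s)\) is a low-pass filter whose bandwidth trades disturbance rejection against noise sensitivity and robustness, \(u_c\) is the output of the outer feedback controller, and \(\hat{d}\) is the estimate of the lumped disturbance.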

207 citations

Journal ArticleDOI
TL;DR: The proposed ESO-based adaptive controller theoretically achieves an excellent asymptotic tracking performance when time-invariant modeling uncertainties exist and preserves the performance results of both control methods while overcoming their practical performance limitations.
Abstract: Velocity signal is difficult to obtain in practical electrohydraulic servomechanisms. Even though it can be approximately derived via numerical differentiation on position measurement, the strong noise effect will greatly deteriorate the achievable control performance. Hence, how to design a high-performance tracking controller without velocity measurement is of practical significance. In this paper, a practical adaptive tracking controller without velocity measurement is proposed for electrohydraulic servomechanisms. To estimate the unmeasurable velocity signal, an extended state observer (ESO) that also provides an estimate of the mismatched disturbance is constructed. The ESO uses the unknown parameter estimates updated by a novel adaptive law, which only depends on the actual position and desired trajectory. Moreover, the matched parametric uncertainty is also handled by online parameter adaptation, and the matched disturbance is suppressed via a robust control law. The proposed ESO-based adaptive controller theoretically achieves an excellent asymptotic tracking performance when time-invariant modeling uncertainties exist. In the presence of time-variant modeling uncertainties, guaranteed transient performance and prescribed final tracking accuracy can also be achieved. The proposed control strategy bridges the gap between adaptive control and disturbance observer-based control without using the velocity signal, and preserves the performance results of both control methods while overcoming their practical performance limitations. Comparative experiments are performed on an actual servovalve-controlled double-rod hydraulic actuator to verify the superiority of the proposed control strategy.
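
As a self-contained illustration of the observer idea in this entry, the sketch below implements a plain linear extended state observer for a double integrator using only position measurements; it is not the paper's adaptive ESO, and the model, bandwidth, and test signals are assumptions made for the sketch.

    import numpy as np

    # Minimal linear ESO for x1' = x2, x2' = b0*u + f, with only x1 measured,
    # mirroring the "no velocity sensor" setting above. b0, the observer
    # bandwidth, and the signals are illustrative assumptions.
    dt, b0, wo = 1e-3, 2.0, 50.0           # sample time, input gain, observer bandwidth
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3  # ESO gains from pole placement at -wo

    x = np.array([0.0, 0.0])               # true state [position, velocity]
    z = np.array([0.0, 0.0, 0.0])          # estimates [position, velocity, disturbance]

    for k in range(5000):
        t = k * dt
        u = np.sin(t)                      # arbitrary exciting input
        f = 0.5 * np.sin(3.0 * t)          # unknown lumped (mismatched) disturbance
        # True plant, Euler-integrated for the sketch.
        x = x + dt * np.array([x[1], b0 * u + f])
        y = x[0]                           # only position is measured
        # ESO update: correct all three estimates with the position error.
        e = y - z[0]
        z = z + dt * np.array([z[1] + l1 * e,
                               z[2] + b0 * u + l2 * e,
                               l3 * e])

    print("velocity estimate error:", abs(x[1] - z[1]))
    print("disturbance estimate error:", abs(f - z[2]))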

162 citations


Cites methods from "Combined ILC and Disturbance Observ..."

  • ...In addition, some uncertainty/disturbance observer-based control methods have also been investigated, such as time-delay-estimation-based nonsingular terminal sliding mode control [18], [19], disturbance observer-based backstepping control [20], iterative learning control (ILC) [21], and sliding mode control [22]....


Posted Content
TL;DR: The origins of the DOB are introduced, and its analysis and synthesis techniques for linear and nonlinear systems are explained using a unified framework.
Abstract: Disturbance Observer has been one of the most widely used robust control tools since it was proposed in 1983. This paper introduces the origins of Disturbance Observer and presents a survey of the major results on Disturbance Observer-based robust control in the last thirty-five years. Furthermore, it explains the analysis and synthesis techniques of Disturbance Observer-based robust control for linear and nonlinear systems by using a unified framework. In the last section, this paper presents concluding remarks on Disturbance Observer-based robust control and its engineering applications.

86 citations


Cites background from "Combined ILC and Disturbance Observ..."

  • ...trollers, such as nonlinear and iterative learning controllers, have been generally employed in the design of the robust controllers [87], [106], [107]....


Journal ArticleDOI
28 Jun 2019
TL;DR: Presents hardware extensions and modifications for full automation, a mapping approach specifically tailored to excavation, trajectory planning on these maps that avoids collisions with the environment, an arm controller aware of various limits, and an improved state machine that enables execution on real hardware.
Abstract: This letter shows accurate and autonomous creation of free-form trenches using a walking excavator. We present hardware extensions and modifications for full automation, a mapping approach specifically tailored to excavation, environment collision-free trajectory planning on these maps, an arm controller aware of various limits and an improved state machine that enables the execution on real hardware. Furthermore, previous work about excavation planning and the design of a single soil-independent dig cycle is extended and transferred from simulation to hardware. The entire system is tested on a four-segment, piecewise-planar trench, and a free-form curved trench. Both shapes were successfully excavated with unprecedented accuracy.
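
Purely to illustrate the kind of sequencing such a system coordinates, here is a hypothetical dig-cycle state machine in Python; the state names and transitions are invented for this sketch and do not reproduce the letter's actual state machine.

    from enum import Enum, auto

    # Hypothetical dig-cycle state machine; states and transitions are
    # invented for illustration, not those of the cited letter.
    class DigState(Enum):
        PLAN_SCOOP = auto()    # choose the next scoop on the current map
        MOVE_TO_DIG = auto()   # collision-free approach trajectory
        DIG = auto()           # execute one soil-independent dig cycle
        MOVE_TO_DUMP = auto()
        DUMP = auto()
        UPDATE_MAP = auto()    # re-map the trench, compare with the target shape
        DONE = auto()

    def next_state(state: DigState, trench_reached: bool) -> DigState:
        if state is DigState.DONE:
            return DigState.DONE
        if state is DigState.UPDATE_MAP:
            return DigState.DONE if trench_reached else DigState.PLAN_SCOOP
        order = [DigState.PLAN_SCOOP, DigState.MOVE_TO_DIG, DigState.DIG,
                 DigState.MOVE_TO_DUMP, DigState.DUMP, DigState.UPDATE_MAP]
        return order[order.index(state) + 1]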

52 citations


Cites methods from "Combined ILC and Disturbance Observ..."

  • ...[11] used iterative learning control to predict the disturbance in the next dig cycle....


01 Nov 2002
TL;DR: A unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve, aimed at identifying those methods that achieve better tracking, have a low sensitivity to system uncertainties, and offer a good balance between development effort and end results.
Abstract: Presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim at identifying those methods that achieve better tracking, have a low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach to this problem, relying on several practical metrics, is introduced herein. The choice of metrics is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Apart from the quantitative assessment, we also raise aspects that are difficult to quantify but must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
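
Since the comparison above rests on practical metrics computed from experimental runs, the snippet below sketches the sort of per-experiment metrics such a study could evaluate; the specific metric set is an assumption made for illustration, not the paper's.

    import numpy as np

    def controller_metrics(t, y, y_ref, u):
        """Illustrative tracking metrics for comparing position controllers;
        t, y, y_ref and u are equally sampled arrays from one experiment."""
        e = y_ref - y
        dt = t[1] - t[0]
        return {
            "rms_error": float(np.sqrt(np.mean(e ** 2))),
            "max_abs_error": float(np.max(np.abs(e))),
            "final_error": float(np.mean(np.abs(e[-len(e) // 10:]))),  # last 10 %
            "control_effort": float(np.sum(np.abs(u)) * dt),
        }

    # Synthetic example: a unit step tracked with a first-order lag.
    t = np.linspace(0.0, 5.0, 1001)
    y_ref = np.ones_like(t)
    y = 1.0 - np.exp(-3.0 * t)
    u = 3.0 * np.exp(-3.0 * t)
    print(controller_metrics(t, y, y_ref, u))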

46 citations

References
Book
01 Jan 1991
TL;DR: The authors examine entropy, relative entropy, and mutual information and their roles in data compression, channel capacity, and rate distortion, in what is a standard reference text on information theory.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


"Combined ILC and Disturbance Observ..." refers methods in this paper

  • ...19 shows that the noise is colored so that the capacity of the continuous channel in bits/second can be computed by integrating the SNR over the spectrum [53] as...

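
For context on the excerpt above: with a signal power spectral density \(S(f)\) and a colored-noise power spectral density \(N(f)\) over a band \(B\), the rate obtained by integrating the SNR over the spectrum takes the standard form from information theory [53] (stated here as general background, not quoted from the excavation paper):

    \[
    C \;=\; \int_{B} \log_2\!\left(1 + \frac{S(f)}{N(f)}\right) df \quad \text{bits/second}.
    \]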

Book
01 Jan 1991
TL;DR: Covers in a progressive fashion a number of analysis tools and design techniques directly applicable to nonlinear control problems in high performance systems (in aerospace, robotics and automotive areas).
Abstract: Covers in a progressive fashion a number of analysis tools and design techniques directly applicable to nonlinear control problems in high performance systems (in aerospace, robotics and automotive areas).

15,545 citations

Journal ArticleDOI
TL;DR: A betterment process for the operation of a mechanical robot is proposed, in the sense that it betters the next operation of the robot by using the previous operation's data.
Abstract: This article proposes a betterment process for the operation of a mechanical robot in a sense that it betters the next operation of a robot by using the previous operation's data. The process has an iterative learning structure such that the (k + 1)th input to joint actuators consists of the kth input plus an error increment composed of the derivative difference between the kth motion trajectory and the given desired motion trajectory. The convergence of the process to the desired motion trajectory is assured under some reasonable conditions. Numerical results by computer simulation are presented to show the effectiveness of the proposed learning scheme.
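
The update described above is the classic D-type ILC law, \(u_{k+1}(t) = u_k(t) + \Gamma\,\dot e_k(t)\) with \(e_k = y_d - y_k\). The sketch below applies a sampled version to a toy first-order plant; the plant, learning gain, and reference trajectory are illustrative assumptions, not taken from the article.

    import numpy as np

    # Sampled D-type ILC ("betterment process"): u_{k+1} = u_k + gamma * de_k/dt,
    # applied to the Euler-discretized toy plant y' = -y + u.
    dt, T, gamma = 0.01, 2.0, 0.8
    n = int(T / dt)
    t = np.arange(n) * dt
    y_d = np.sin(np.pi * t)        # desired trajectory
    u = np.zeros(n)                # input for iteration 0

    for k in range(20):
        y = np.zeros(n)
        for i in range(n - 1):
            y[i + 1] = y[i] + dt * (-y[i] + u[i])
        e = y_d - y
        print(f"iteration {k}: max |e| = {np.max(np.abs(e)):.4f}")
        # Error increment built from the sampled time derivative of the error.
        u = u + gamma * np.gradient(e, dt)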

3,222 citations

Book
01 Jan 1975
TL;DR: A classic treatment of the input-output properties of feedback systems, covering norms, general stability theorems, the small gain theorem and its applications, passivity, and supporting tools such as the Bellman-Gronwall lemma.
Abstract: Preface to the Classics edition Preface Acknowledgments Note to the reader List of symbols 1. Memoryless nonlinearities 2. Norms 3. General theorems 4. Linear systems 5. Applications of the small gain theorem 6. Passivity Appendix A. Integrals and series Appendix B. Fourier transforms Appendix C. Convolution Appendix D. Algebras Appendix E. Bellman-Gronwall Lemma References Index.

2,894 citations


"Combined ILC and Disturbance Observ..." refers background in this paper

  • ...The small gain theorem [38] guarantees that if ‖H‖ < 1, then this loop is stable, and if ‖H‖ ≪ 1, it has negligible effect on the system response....

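
The condition in the excerpt above can be checked numerically once H is known; the snippet below does this for a placeholder first-order H(s), which stands in for (and is not) the actual mismatch transfer function analyzed in the paper.

    import numpy as np
    from scipy import signal

    # Placeholder loop transfer function H(s) = 0.5 / (s + 1).
    H = signal.TransferFunction([0.5], [1.0, 1.0])
    w = np.logspace(-2, 3, 2000)          # frequency grid (rad/s)
    _, resp = signal.freqresp(H, w)
    gain = float(np.max(np.abs(resp)))    # numerical estimate of the H-infinity norm
    print(f"sup |H(jw)| ~ {gain:.3f}; small-gain condition ||H|| < 1: {gain < 1.0}")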

Journal ArticleDOI
TL;DR: Though beginning its third decade of active research, the field of ILC shows no sign of slowing down and includes many results and learning algorithms beyond the scope of this survey.
Abstract: This article surveyed the major results in iterative learning control (ILC) analysis and design over the past two decades. Problems in stability, performance, learning transient behavior, and robustness were discussed along with four design techniques that have emerged as among the most popular. The content of this survey was selected to provide the reader with a broad perspective of the important ideas, potential, and limitations of ILC. Indeed, the maturing field of ILC includes many results and learning algorithms beyond the scope of this survey. Though beginning its third decade of active research, the field of ILC shows no sign of slowing down.

2,645 citations


"Combined ILC and Disturbance Observ..." refers background in this paper

  • ...In ILC, there are two common system representations [7]:...

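
The truncated excerpt above points to the two system representations that are standard in the ILC literature [7]: a lifted (supervector) time-domain form and a frequency-domain (z-domain) form. As general background rather than a quotation from the paper, the lifted form stacks one iteration's samples into vectors:

    \[
    \mathbf{y}_j = P\,\mathbf{u}_j + \mathbf{d}, \qquad
    \mathbf{e}_j = \mathbf{y}_d - \mathbf{y}_j, \qquad
    \mathbf{u}_{j+1} = Q\left(\mathbf{u}_j + L\,\mathbf{e}_j\right),
    \]

where \(P\) collects the plant's Markov parameters in a lower-triangular matrix, \(Q\) is a filtering matrix, and \(L\) is the learning matrix; the frequency-domain alternative works with the corresponding transfer functions \(P(z)\), \(Q(z)\), and \(L(z)\).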