Author

Dave Kooijman

Bio: Dave Kooijman is an academic researcher from the University of Toronto. The author has contributed to research in the topics of Adaptive control and Iterative learning control. The author has an h-index of 2 and has co-authored 4 publications receiving 11 citations. Previous affiliations of Dave Kooijman include Eindhoven University of Technology.

Papers
Journal ArticleDOI
TL;DR: Results highlight that the combined adaptive control and iterative learning control framework can achieve high-precision trajectory tracking when unknown and changing disturbances are present and can achieve transfer of learned experience between dynamically different systems.
Abstract: Robust and adaptive control strategies are needed when robots or automated systems are introduced to unknown and dynamic environments where they are required to cope with disturbances, unmodeled dynamics, and parametric uncertainties. In this paper, we demonstrate the capabilities of a combined $\mathcal{L}_1$ adaptive control and iterative learning control (ILC) framework to achieve high-precision trajectory tracking in the presence of unknown and changing disturbances. The $\mathcal{L}_1$ adaptive controller makes the system behave close to a reference model but does not guarantee perfect trajectory tracking; ILC, in turn, improves tracking performance based on previous iterations. The combined framework in this paper uses $\mathcal{L}_1$ adaptive control as an underlying controller that achieves robust and repeatable behavior, while the ILC acts as a high-level adaptation scheme that mainly compensates for systematic tracking errors. We illustrate that this framework enables transfer learning between dynamically different systems, where the learned experience of one system can be shown to be beneficial for a different system. Experimental results with two different quadrotors show the superior performance of the combined $\mathcal{L}_1$-ILC framework compared with approaches using ILC with an underlying proportional-derivative or proportional-integral-derivative controller. Results highlight that our $\mathcal{L}_1$-ILC framework can achieve high-precision trajectory tracking when unknown and changing disturbances are present and can transfer learned experience between dynamically different systems. Moreover, our approach achieves precise trajectory tracking on the first attempt when the initial input is generated based on the reference model of the adaptive controller.
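A minimal sketch of the iteration-domain update underlying ILC, on a scalar toy plant (my own illustration; the plant gain, learning gain, and Q-filter values below are hypothetical and not taken from the paper):

```python
import numpy as np

def ilc_update(u, e, learning_gain=0.5, q_filter=1.0):
    """One serial-type ILC update: u_{j+1} = Q * (u_j + L * e_j)."""
    return q_filter * (u + learning_gain * e)

# Toy plant with an unknown gain the learning must compensate: y = 0.8 * u
y_ref = np.ones(50)          # reference trajectory
u = np.zeros(50)             # feedforward input, refined each iteration
for _ in range(20):
    y = 0.8 * u              # "fly" the trajectory
    e = y_ref - y            # tracking error of this iteration
    u = ilc_update(u, e)     # learn from it
print(np.max(np.abs(y_ref - 0.8 * u)))  # error shrinks over iterations
```

With a contraction factor $|1 - 0.5 \cdot 0.8| = 0.6$ per iteration the error decays geometrically; the actual framework replaces these scalar gains with learning and Q-filters designed from the $\mathcal{L}_1$ reference model.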

8 citations

Proceedings ArticleDOI
01 Jun 2019
TL;DR: In this article, an attitude representation in the Cartesian product of the 2-sphere and the 1-sphere was proposed to tackle the attitude control problem using a non-linear tracking control law.
Abstract: The control of a quadrotor is typically split into two subsequent problems: finding desired accelerations to control its position, and controlling its attitude and the total thrust to track these accelerations and to track a yaw angle reference. While the thrust vector, generating accelerations, and the angle of rotation about the thrust vector, determining the yaw angle, can be controlled independently, most attitude control strategies in the literature, relying on representations in terms of quaternions, rotation matrices or Euler angles, result in an unnecessary coupling between the control of the thrust vector and of the angle about this vector. This leads, for instance, to undesired position tracking errors due to yaw tracking errors. In this paper we propose to tackle the attitude control problem using an attitude representation in the Cartesian product of the 2-sphere and the 1-sphere, denoted by $\mathcal{S}^{2}\times \mathcal{S}^{1}$ . We propose a non-linear tracking control law on $\mathcal{S}^{2}\times \mathcal{S}^{1}$ that decouples the control of the thrust vector and of the angle of rotation about the thrust vector, and guarantees almost global asymptotic stability. Simulation results highlight the advantages of the proposed approach over previous approaches.
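One way to make the $\mathcal{S}^{2}\times \mathcal{S}^{1}$ idea concrete is to split a rotation matrix into the thrust direction (the $\mathcal{S}^{2}$ part) and the residual angle about it (the $\mathcal{S}^{1}$ part). The decomposition below is a generic geometric sketch, not the paper's control law:

```python
import numpy as np

def decompose_attitude(R):
    """Split a rotation matrix R into (n, theta):
    n = R e3 is the thrust direction (a point on S^2), and theta is the
    angle of the residual rotation about e3 (a point on S^1).
    Valid away from the singular case n = -e3."""
    e3 = np.array([0.0, 0.0, 1.0])
    n = R @ e3
    # Minimal ("tilt") rotation taking e3 to n, via Rodrigues' formula
    v = np.cross(e3, n)
    c = float(e3 @ n)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    R_tilt = np.eye(3) + K + (K @ K) / (1.0 + c)
    R_res = R_tilt.T @ R          # leaves e3 fixed: a pure rotation about e3
    theta = np.arctan2(R_res[1, 0], R_res[0, 0])
    return n, theta

n, theta = decompose_attitude(np.eye(3))  # hover attitude: n = e3, theta = 0
```

Controlling n and theta separately is what avoids the coupling between thrust-vector and yaw errors that quaternion, rotation-matrix, or Euler-angle controllers can introduce.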
Posted Content
TL;DR: A non-linear tracking control law is proposed that decouples the control of the thrust vector and of the angle of rotation about the thrust vector, and guarantees almost global asymptotic stability.

Cited by
Journal ArticleDOI
25 Jun 2020
TL;DR: A learning algorithm is proposed that enables a quadrotor unmanned aerial vehicle (UAV) to automatically improve its tracking performance by learning from the tracking errors made by other UAVs with different dynamics.
Abstract: Robots are usually programmed for particular tasks with a considerable amount of hand-crafted tuning work. Whenever a new robot with different dynamics is presented, the well-designed control algorithms for the robot usually have to be re-tuned to guarantee good performance. It remains challenging to directly program a robot to automatically learn from the experiences gathered by other dynamically different robots. With such a motivation, this letter proposes a learning algorithm that enables a quadrotor unmanned aerial vehicle (UAV) to automatically improve its tracking performance by learning from the tracking errors made by other UAVs with different dynamics. This learning algorithm utilizes the relationship between the dynamics of different UAVs, named the target and training UAVs, respectively. The learning signal is generated by the learning algorithm and then added to the feedforward loop of the target UAV, which does not affect the closed-loop stability. The learning convergence can be guaranteed by the design of a learning filter. With the proposed learning algorithm, the target UAV can improve its tracking performance by learning from the training UAV without baseline controller modifications. Both numerical studies and experimental tests are conducted to validate the effectiveness of the proposed learning algorithm.
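The transfer idea can be caricatured with two scalar "vehicles" (my own construction; the gains g_train and g_tgt and the simple scaling step are illustrative, not the letter's learning-filter design):

```python
import numpy as np

g_train, g_tgt = 0.8, 1.2            # dynamics of training / target "UAVs"
y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))

# The training vehicle refines a feedforward input by iterative learning.
u = np.zeros_like(y_ref)
for _ in range(30):
    u = u + 0.5 * (y_ref - g_train * u)

# The known relationship between the two dynamics maps the learned input
# to the target vehicle, which then tracks well on its first attempt.
u_transfer = (g_train / g_tgt) * u
err = np.max(np.abs(y_ref - g_tgt * u_transfer))
print(err)   # small first-attempt tracking error on the target vehicle
```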

8 citations

Journal ArticleDOI
TL;DR: A two-step optimization-based design method for iterative learning control that is easy to implement, applied to the quadrotor unmanned aerial vehicle (UAV) trajectory tracking problem.
Abstract: This paper presents a two-step optimization-based design method for iterative learning control and applies it to the quadrotor unmanned aerial vehicle (UAV) trajectory tracking problem. Iterative learning control aims to improve the tracking performance through learning from errors over iterations in repetitively operated systems. The tracking errors from previous iterations are injected into a learning filter and a robust filter to generate the learning signal. The design of the two filters usually involves nontrivial tuning work. This paper presents a new two-step optimization-based design method for iterative learning control, which is easy to implement. In particular, the learning filter design problem is transferred into a feedback controller design problem for a purposely constructed system, which is thereafter solved based on H-infinity optimal control theory. The robust filter is then obtained by solving an additional optimization to guarantee the learning convergence. Through the proposed design method, the learning performance is optimized and the system's stability is guaranteed. The proposed two-step optimization-based design method and the resulting iterative learning control algorithm are validated by both numerical and experimental studies.
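A common frequency-domain convergence condition for such a learning filter $L$ and robust filter $Q$ is $\sup_\omega |Q(e^{j\omega})(1 - L(e^{j\omega})G(e^{j\omega}))| < 1$, which can be checked numerically. The first-order plant, deliberately mismatched model-inversion learning filter, and low-pass $Q$ below are my own stand-ins, not the paper's H-infinity designs:

```python
import numpy as np

def freq_resp(num, den, w):
    """Frequency response of num(z)/den(z) on the unit circle
    (coefficients in numpy's highest-degree-first order)."""
    z = np.exp(1j * w)
    return np.polyval(num, z) / np.polyval(den, z)

w = np.linspace(1e-3, np.pi, 500)
G = freq_resp([0.1], [1.0, -0.9], w)          # plant G(z) = 0.1 / (z - 0.9)
L = freq_resp([1.0, -0.85], [0.12], w)        # inverse of a *mismatched* model
Q = freq_resp([0.05, 0.05], [1.0, -0.9], w)   # low-pass filter with unity DC gain
rho = np.abs(Q * (1.0 - L * G))               # iteration-domain contraction factor
print(rho.max())                              # < 1 implies monotonic convergence
```

Keeping rho below 1 at every frequency is exactly what the robust filter buys: Q sacrifices learning at high frequencies, where the model mismatch in L is worst, in exchange for guaranteed convergence.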

7 citations

Journal ArticleDOI
TL;DR: Model predictive control has an advantage over conventional state feedback and output feedback controllers because it predicts the response of the system, rather than simply reacting to it, and can offer improved performance in the presence of input and output constraints.
Abstract: A study provides a control scheme that alleviates the reliance of model predictive control (MPC) on an accurate model by applying an adaptive control augmentation to the MPC control law. Adaptive MPC has been investigated previously by incorporating a parameter estimation algorithm to identify the model used by the MPC. The MPC controller is used to compute the optimal reference command, which is then augmented by the adaptive control law. The study also shows that the L1-augmented non-predictive adaptive control scheme exhibits more time delay than the MPC-L1 scheme.

6 citations