Author

Karime Pereida

Other affiliations: University of New South Wales
Bio: Karime Pereida is an academic researcher from the University of Toronto. The author has contributed to research in the topics of Adaptive control and Trajectory. The author has an h-index of 5 and has co-authored 16 publications receiving 85 citations. Previous affiliations of Karime Pereida include the University of New South Wales.

Papers
Proceedings ArticleDOI
01 Oct 2018
TL;DR: A novel adaptive model predictive controller that combines model predictive control (MPC) with an underlying $\mathcal{L}_{1}$ adaptive controller to improve trajectory tracking of a system subject to unknown and changing disturbances is proposed.
Abstract: Robots and automated systems are increasingly being introduced to unknown and dynamic environments where they are required to handle disturbances, unmodeled dynamics, and parametric uncertainties. Robust and adaptive control strategies are required to achieve high performance in these dynamic environments. In this paper, we propose a novel adaptive model predictive controller that combines model predictive control (MPC) with an underlying $\mathcal{L}_{1}$ adaptive controller to improve trajectory tracking of a system subject to unknown and changing disturbances. The $\mathcal{L}_{1}$ adaptive controller forces the system to behave in a predefined way, as specified by a reference model. A higher-level model predictive controller then uses this reference model to calculate the optimal reference input based on a cost function, while taking into account input and state constraints. We focus on the experimental validation of the proposed approach and demonstrate its effectiveness in experiments on a quadrotor. We show that the proposed approach has a lower trajectory tracking error compared to non-predictive, adaptive approaches and a predictive, nonadaptive approach, even when external wind disturbances are applied.
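
The layered architecture lends itself to a compact illustration. Below is a minimal sketch, assuming the $\mathcal{L}_{1}$ loop already makes the closed-loop system behave like a known linear reference model $x_{k+1} = Ax_k + Br_k$; the double-integrator model, horizon, weights, and input clipping are illustrative placeholders, not the authors' implementation (a full MPC would enforce input and state constraints in a QP).

```python
import numpy as np

# Assumed double-integrator reference model that the L1 loop enforces:
# x = [position, velocity], r = commanded reference input (illustrative only).
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
N = 20          # prediction horizon
r_max = 2.0     # input bound (handled crudely by clipping below)

def mpc_reference_input(x0, x_des):
    """Compute the first input of a finite-horizon tracking solution.

    Stacks the predictions x_{k+1} = A^(k+1) x0 + sum_j A^(k-j) B r_j and
    minimizes sum_k ||x_{k+1} - x_des||^2 + rho ||r||^2 by least squares.
    A full MPC would enforce constraints in a QP; clipping stands in here.
    """
    n, m = A.shape[0], B.shape[1]
    Phi = np.zeros((N * n, N * m))   # maps input sequence to stacked states
    free = np.zeros((N * n, n))      # free response of the model
    Ak = np.eye(n)
    for k in range(N):
        free[k * n:(k + 1) * n, :] = Ak @ A
        blk = B.copy()
        for j in range(k, -1, -1):
            Phi[k * n:(k + 1) * n, j * m:(j + 1) * m] = blk
            blk = A @ blk
        Ak = Ak @ A
    rho = 1e-2
    target = np.tile(x_des, N) - free @ x0
    r_seq = np.linalg.solve(Phi.T @ Phi + rho * np.eye(N * m), Phi.T @ target)
    return float(np.clip(r_seq[0], -r_max, r_max))

# Receding-horizon loop: apply only the first input, then re-solve.
x = np.array([0.0, 0.0])
for _ in range(200):
    r = mpc_reference_input(x, np.array([1.0, 0.0]))
    x = A @ x + B.ravel() * r        # L1 loop assumed to enforce this model
print(x)                              # approaches the setpoint [1, 0]
```

Applying only the first input of the optimized sequence and re-solving at every step is what makes this a receding-horizon (MPC) scheme rather than open-loop planning.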

37 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: Improved trajectory tracking performance and generalization capabilities of the combined $\mathcal{L}_1$ adaptive feedback and iterative learning control framework compared to pure ILC are demonstrated in experiments with a quadrotor subject to unknown, dynamic disturbances.
Abstract: As robots and other automated systems are introduced to unknown and dynamic environments, robust and adaptive control strategies are required to cope with disturbances, unmodeled dynamics and parametric uncertainties. In this paper, we propose and provide theoretical proofs of a combined $\mathcal{L}_1$ adaptive feedback and iterative learning control (ILC) framework to improve trajectory tracking of a system subject to unknown and changing disturbances. The $\mathcal{L}_1$ adaptive controller forces the system to behave in a repeatable, predefined way, even in the presence of unknown and changing disturbances; however, this does not imply that perfect trajectory tracking is achieved. ILC improves the tracking performance based on experience from previous executions. The performance of ILC is limited by the robustness and repeatability of the underlying system, which, in this approach, is handled by the $\mathcal{L}_1$ adaptive controller. In particular, we are able to generalize learned trajectories across different system configurations because the $\mathcal{L}_1$ adaptive controller handles the underlying changes in the system. We demonstrate the improved trajectory tracking performance and generalization capabilities of the combined method compared to pure ILC in experiments with a quadrotor subject to unknown, dynamic disturbances. This is the first work to show $\mathcal{L}_1$ adaptive control combined with ILC in experiment.
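
The division of labor is easy to see in a toy iteration-domain example. The sketch below assumes the $\mathcal{L}_1$ controller makes the plant respond like a fixed, repeatable first-order reference model; the P-type update law, the learning gain, and the disturbance value are illustrative choices, not the paper's learning law.

```python
import numpy as np

# The L1 loop is assumed to make the plant respond like this fixed, repeatable
# reference model from input u to output y (an illustrative first-order lag
# with a repeatable residual disturbance d).
def rollout(u, d=0.3):
    y, ys = 0.0, []
    for uk in u:
        y = 0.9 * y + 0.1 * (uk + d)
        ys.append(y)
    return np.array(ys)

T = 100
y_des = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired trajectory
u = np.zeros(T)                                    # iteration-0 input

# P-type ILC: after each full execution, correct the input with the scaled
# tracking error. gamma = 1.0 keeps the iteration-domain contraction
# |1 - gamma*G| < 1 for this particular model (an assumed tuning).
gamma = 1.0
for it in range(15):
    e = y_des - rollout(u)
    print(f"iteration {it:2d}: RMS error = {np.sqrt(np.mean(e ** 2)):.5f}")
    u = u + gamma * e
```

The RMS error shrinks across iterations even though the repeatable disturbance d is never modeled: ILC learns it from experience, while the (assumed) $\mathcal{L}_1$ layer is what makes the executions repeatable in the first place.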

19 citations

Journal ArticleDOI
23 Jan 2018
TL;DR: In this paper, a multirobot, multitask transfer learning framework is proposed for trajectory tracking, where each trajectory represents a different task, since many robotic tasks can be described as a trajectory tracking problem.
Abstract: Transfer learning has the potential to reduce the burden of data collection and to decrease the unavoidable risks of the training phase. In this letter, we introduce a multirobot, multitask transfer learning framework that allows a system to complete a task by learning from a few demonstrations of another task executed on another system. We focus on the trajectory tracking problem where each trajectory represents a different task, since many robotic tasks can be described as a trajectory tracking problem. The proposed multirobot transfer learning framework is based on a combined ${\mathcal{L}_1}$ adaptive control and an iterative learning control approach. The key idea is that the adaptive controller forces dynamically different systems to behave as a specified reference model. The proposed multitask transfer learning framework uses theoretical control results (e.g., the concept of vector relative degree) to learn a map from desired trajectories to the inputs that make the system track these trajectories with high accuracy. This map is used to calculate the inputs for a new, unseen trajectory. Experimental results using two different quadrotor platforms and six different trajectories show that, on average, the proposed framework reduces the first-iteration tracking error by 74% when information from tracking a different single trajectory on a different quadrotor is utilized.
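
The transfer idea can be sketched with a linear stand-in: because the $\mathcal{L}_1$ controller makes both platforms behave like the same reference model, input/output data collected on one system identifies a map that can be inverted for a new trajectory on the other. The impulse-response model, the least-squares identification, and all names below are illustrative assumptions, not the paper's vector-relative-degree construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Both quadrotors are assumed to behave like the same L1-enforced reference
# model. Illustrative linear model: lower-triangular impulse-response matrix
# G_true, equivalent to the first-order lag y_k = 0.9*y_{k-1} + 0.1*u_k.
T = 40
k, j = np.meshgrid(np.arange(T), np.arange(T), indexing="ij")
G_true = np.where(j <= k, 0.1 * 0.9 ** (k - j), 0.0)

# Demonstrations from "another task on another system": random input
# sequences and the outputs the reference model produced for them.
n_demo = 60
U = rng.standard_normal((T, n_demo))
Y = G_true @ U

# Identify the input-to-output map by least squares, then invert it to get a
# feedforward input for a new, unseen trajectory. This is a linear stand-in
# for the paper's map from desired trajectories to inputs.
G_hat = Y @ np.linalg.pinv(U)
y_new = np.sin(np.linspace(0.0, 2.0 * np.pi, T))
u_ff = np.linalg.solve(G_hat, y_new)

err = np.linalg.norm(G_true @ u_ff - y_new) / np.linalg.norm(y_new)
print(f"relative first-iteration tracking error: {err:.2e}")   # ~ 0
```

The key enabler is that the learned map belongs to the shared reference model, not to either physical platform, which is why a demonstration on one quadrotor reduces the first-iteration error on the other.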

14 citations

Posted Content
TL;DR: A framework for this regime of tasks including two main components: a bi-level motion optimization algorithm for real-time trajectory generation and a learning-based controller optimized for precise tracking of high-speed motions via a learned inverse dynamics model.
Abstract: Mobile manipulators consist of a mobile platform equipped with one or more robot arms and are of interest for a wide array of challenging tasks because of their extended workspace and dexterity. Typically, mobile manipulators are deployed in slow-motion collaborative robot scenarios. In this paper, we consider scenarios where accurate high-speed motions are required. We introduce a framework for this regime of tasks including two main components: (i) a bi-level motion optimization algorithm for real-time trajectory generation, which relies on Sequential Quadratic Programming (SQP) and Quadratic Programming (QP), respectively; and (ii) a learning-based controller optimized for precise tracking of high-speed motions via a learned inverse dynamics model. We evaluate our framework with a mobile manipulator platform through numerous high-speed ball catching experiments, where we show a success rate of 85.33%. To the best of our knowledge, this success rate exceeds the reported performance of existing related systems and sets a new state of the art.
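
The second component can be illustrated on a one-degree-of-freedom toy: fit the residual between the true and nominal inverse dynamics from data, then use the corrected model as feedforward torque. The dynamics, the features, and the linear regression below are assumptions for illustration; the paper learns its inverse dynamics model on the real platform.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-DOF "arm": true dynamics tau = m*qdd + c*qd + k*sin(q); the nominal
# model only knows an (incorrect) mass, so the rest is a residual to learn.
m_true, c_true, k_true, m_nom = 1.2, 0.4, 0.8, 1.0

def true_inverse_dynamics(q, qd, qdd):
    return m_true * qdd + c_true * qd + k_true * np.sin(q)

def nominal_inverse_dynamics(q, qd, qdd):
    return m_nom * qdd

# Collect training data, then fit the residual tau - tau_nom with linear
# regression on simple features (a stand-in for the paper's learned model).
n = 500
q, qd, qdd = (rng.uniform(-2.0, 2.0, n) for _ in range(3))
residual = true_inverse_dynamics(q, qd, qdd) - nominal_inverse_dynamics(q, qd, qdd)
features = np.column_stack([qdd, qd, np.sin(q)])
w, *_ = np.linalg.lstsq(features, residual, rcond=None)

def learned_inverse_dynamics(q, qd, qdd):
    """Nominal model plus learned residual, used as feedforward torque."""
    phi = np.array([qdd, qd, np.sin(q)])
    return nominal_inverse_dynamics(q, qd, qdd) + phi @ w

# On an aggressive desired motion, the learned feedforward matches the true
# required torque closely, which is what enables precise high-speed tracking.
print(learned_inverse_dynamics(0.5, 3.0, 10.0),
      true_inverse_dynamics(0.5, 3.0, 10.0))
```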

10 citations


Cited by
Proceedings ArticleDOI
12 Jul 2017
TL;DR: In this paper, the authors present a new method of learning control policies that successfully operate under unknown dynamic models by leveraging a large number of training examples that are generated using a physical simulator.
Abstract: We present a new method of learning control policies that successfully operate under unknown dynamic models. We create such policies by leveraging a large number of training examples that are generated using a physical simulator. Our system is made of two components: a Universal Policy (UP) and a function for Online System Identification (OSI). We describe our control policy as universal because it is trained over a wide array of dynamic models. These variations in the dynamic model may include differences in mass and inertia of the robots' components, variable friction coefficients, or unknown mass of an object to be manipulated. By training the Universal Policy with this variation, the control policy is prepared for a wider array of possible conditions when executed in an unknown environment. The second part of our system uses the recent state and action history of the system to predict the dynamics model parameters $\mu$. The value of $\mu$ from the Online System Identification is then provided as input to the control policy (along with the system state). Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment. We have evaluated the performance of this system on a variety of tasks, including the problem of cart-pole swing-up, the double inverted pendulum, locomotion of a hopper, and block-throwing of a manipulator. UP-OSI is effective at these tasks across a wide range of dynamic models. Moreover, when tested with dynamic models outside of the training range, UP-OSI outperforms the Universal Policy alone, even when UP is given the actual value of the model dynamics. In addition to the benefits of creating more robust controllers, UP-OSI also holds out promise of narrowing the Reality Gap between simulated and real physical systems.
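
The UP-OSI composition can be sketched on a scalar toy plant: the OSI module estimates the dynamics parameter $\mu$ from recent state-action history, and the policy conditions its action on that estimate. The least-squares estimator and the gain-scheduled controller below stand in for the paper's neural networks; all names and values are illustrative.

```python
import numpy as np

# Toy plant: x' = x + mu * u, with unknown gain mu (the "dynamics parameter").
mu_true = 0.7

def step(x, u):
    return x + mu_true * u

def osi(history):
    """Online System Identification: estimate mu from recent (x, u, x') data.

    A one-parameter least-squares fit; the paper trains a neural network in
    simulation over many sampled dynamics models instead.
    """
    X = np.array([u for (_, u, _) in history])
    Y = np.array([xn - x for (x, _, xn) in history])
    return float(X @ Y / (X @ X + 1e-9))

def universal_policy(x, mu_hat, x_goal=1.0):
    """Universal Policy: conditions its action on the estimated dynamics.

    A proportional controller with gain scheduled by 1/mu stands in for the
    policy trained over a distribution of models.
    """
    return (x_goal - x) / max(mu_hat, 1e-2)

x, history = 0.0, []
for t in range(20):
    # Bootstrap with a default estimate until the history is informative.
    mu_hat = osi(history) if len(history) >= 3 else 0.5
    u = universal_policy(x, mu_hat)
    xn = step(x, u)
    history.append((x, u, xn))
    x = xn
print(f"final state: {x:.4f}, estimated mu: {osi(history):.3f}")
```

Once the history identifies mu, the conditioned policy reaches the goal in one step on this toy; the same closed loop would re-estimate and adapt if mu changed mid-run, which is the responsiveness the abstract describes.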

251 citations

Proceedings ArticleDOI
01 May 2020
TL;DR: In this article, an adaptive control framework leveraging the theory of stochastic CLFs and CBFs along with tractable Bayesian model learning via Gaussian Processes or Bayesian neural networks is proposed to guarantee stability and safety while adapting to unknown dynamics with probability 1.
Abstract: Deep learning has enjoyed much recent success, and applying state-of-the-art model learning methods to controls is an exciting prospect. However, there is a strong reluctance to use these methods on safety-critical systems, which have constraints on safety, stability, and real-time performance. We propose a framework which satisfies these constraints while allowing the use of deep neural networks for learning model uncertainties. Central to our method is the use of Bayesian model learning, which provides an avenue for maintaining appropriate degrees of caution in the face of the unknown. In the proposed approach, we develop an adaptive control framework leveraging the theory of stochastic CLFs (Control Lyapunov Functions) and stochastic CBFs (Control Barrier Functions) along with tractable Bayesian model learning via Gaussian Processes or Bayesian neural networks. Under reasonable assumptions, we guarantee stability and safety while adapting to unknown dynamics with probability 1. We demonstrate this architecture for high-speed terrestrial mobility targeting potential applications in safety-critical high-speed Mars rover missions.
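
The safety-filter idea admits a compact scalar sketch: minimally alter a nominal command so that a control barrier function condition, tightened by the learned model's predictive uncertainty, still holds. The dynamics, the stubbed GP posterior, and the kappa-sigma tightening below are illustrative assumptions, not the paper's stochastic CLF/CBF formulation; with one affine constraint in a scalar input, the CBF-QP reduces to the closed form shown.

```python
import numpy as np

# 1D toy: xdot = f(x) + g(x)*u + d(x), with d unknown and modeled by a GP
# that returns a posterior mean and standard deviation. The GP is stubbed
# here; the paper fits a Gaussian Process or Bayesian neural net to data.
def f(x): return 0.1 * x
def g(x): return 1.0
def gp_disturbance(x):
    return 0.2 * np.sin(x), 0.05    # (assumed mean, assumed std)

x_max = 1.0                          # safe set: h(x) = x_max - x >= 0
alpha, kappa = 2.0, 2.0              # CBF gain; kappa sigmas of caution

def safety_filter(x, u_nom):
    """Minimally modify u_nom so hdot >= -alpha*h holds with margin.

    The chance constraint is approximated by tightening the deterministic CBF
    condition with kappa * (GP std). With one affine constraint on a scalar
    input, the CBF-QP argmin ||u - u_nom||^2 has the closed form below.
    """
    h = x_max - x
    mu_d, sigma_d = gp_disturbance(x)
    # hdot = -(f + g*u + d); require f + g*u + mu_d + kappa*sigma_d <= alpha*h
    u_bound = (alpha * h - f(x) - mu_d - kappa * sigma_d) / g(x)
    return min(u_nom, u_bound)

x, dt = 0.0, 0.01
for _ in range(2000):
    u = safety_filter(x, u_nom=5.0)       # aggressive nominal command
    d, _ = gp_disturbance(x)              # true disturbance = GP mean here
    x += dt * (f(x) + g(x) * u + d)
print(f"x = {x:.3f} (stays below x_max = {x_max})")
```

The state settles a few kappa-sigma below the boundary: the more uncertain the learned model, the larger the margin the filter keeps, which is the "appropriate degree of caution" the abstract refers to.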

80 citations

Journal ArticleDOI
TL;DR: In this article, an analytical formulation for relative dielectric constant retrieval is reconstructed to establish a relationship between the response of a spiral microstrip resonator and the effective relative dielectric constant of a lossy superstrate, such as biological tissue.
Abstract: An analytical formulation for relative dielectric constant retrieval is reconstructed to establish a relationship between the response of a spiral microstrip resonator and the effective relative dielectric constant of a lossy superstrate, such as biological tissue. To do so, an analytical equation is modified by constructing functions for the two unknowns, the filling factor $A$ and the effective length $l_{\text{eff}}$ of the resonator. This is done by simulating the resonator with digital phantoms of varying permittivity. The values of $A$ and $l_{\text{eff}}$ are determined for each phantom from the resulting S-parameter response, using Particle Swarm Optimization. Multiple non-linear regression is applied to produce equations for $A$ and $l_{\text{eff}}$, expressed as a function of frequency and the phantom's relative dielectric constant. These equations are combined to form a new non-linear analytical equation, which is then solved using the Newton-Raphson iterative method, for both simulations and measurements of physical phantoms. To verify the reconstructed dielectric constant, the dielectric properties of the physical phantoms are determined with a commercial high-temperature open-ended coaxial probe. The dielectric properties are reconstructed by the described method, with less than 3.67% error with respect to the measurements.
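
The final solve step can be sketched as follows, assuming regression has already produced surrogates for $A$ and $l_{\text{eff}}$ and that the resonator obeys a half-wavelength resonance condition. The coefficients and the resonance relation below are invented for illustration and are not the paper's fitted equations; only the Newton-Raphson structure is taken from the abstract.

```python
import numpy as np

C0 = 299_792_458.0   # speed of light, m/s

# Illustrative regression surrogates for the filling factor A and effective
# length l_eff as functions of the superstrate's relative dielectric constant
# (the paper fits these to simulations; these coefficients are made up).
def filling_factor(eps_r):
    return 0.45 + 0.01 * eps_r

def effective_length(eps_r):
    return 0.030 + 1e-4 * eps_r     # meters

def resonance_residual(eps_r, f_meas):
    """Nonlinear equation F(eps_r) = 0 combining the surrogates with an
    assumed resonance condition f = c / (2 * l_eff * sqrt(eps_eff))."""
    eps_eff = 1.0 + filling_factor(eps_r) * (eps_r - 1.0)
    return f_meas - C0 / (2.0 * effective_length(eps_r) * np.sqrt(eps_eff))

def newton_raphson(f_meas, eps0=10.0, tol=1e-9, max_iter=50):
    eps = eps0
    for _ in range(max_iter):
        r = resonance_residual(eps, f_meas)
        # Numerical derivative; an analytic Jacobian works equally well.
        dr = (resonance_residual(eps + 1e-6, f_meas) - r) / 1e-6
        step = r / dr
        eps -= step
        if abs(step) < tol:
            break
    return eps

# Forward-simulate a "measurement" for eps_r = 40, then recover it.
f_meas = C0 / (2.0 * effective_length(40.0)
               * np.sqrt(1.0 + filling_factor(40.0) * 39.0))
print(f"recovered eps_r = {newton_raphson(f_meas):.3f}")   # ~ 40.0
```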

63 citations

Posted Content
TL;DR: A review of the recent advances made in using machine learning to achieve safe decision making under uncertainties, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research can be found in this article.
Abstract: The last half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robotic deployments from both the control and reinforcement learning communities. This article provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision making under uncertainties, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research. Our review includes: learning-based control approaches that safely improve performance by learning the uncertain dynamics, reinforcement learning approaches that encourage safety or robustness, and methods that can formally certify the safety of a learned control policy. As data- and learning-based robot control methods continue to gain traction, researchers must understand when and how to best leverage them in real-world scenarios where safety is imperative, such as when operating in close proximity to humans. We highlight some of the open challenges that will drive the field of robot learning in the coming years, and emphasize the need for realistic physics-based benchmarks to facilitate fair comparisons between control and reinforcement learning approaches.

53 citations