scispace - formally typeset
Journal ArticleDOI

Stochastic Genetic Algorithm-Assisted Fuzzy Q -Learning for Robotic Manipulators

TLDR
Using a stochastic genetic algorithm as the optimizer for action selection at each stage of a Fuzzy Q-Learning-based controller proves far more effective for robotic manipulator control than choosing an algebraic minimal action.
Abstract
This work proposes stochastic genetic algorithm-assisted Fuzzy Q-Learning-based robotic manipulator control. Specifically, the aim is to redefine the action-choosing mechanism in Fuzzy Q-Learning for robotic manipulator control. Conventionally, a Fuzzy Q-Learning-based controller selects a deterministic action from the available actions using fuzzy Q values. This deterministic Fuzzy Q-Learning is not an efficient approach, especially for highly coupled nonlinear systems such as robotic manipulators: restricting the search for the optimal action to the agent’s action set or a restricted set of Q values (deterministic) is myopic. Herein, the proposal is to employ a genetic algorithm as a stochastic optimizer for action selection at each stage of the Fuzzy Q-Learning-based controller. This turns out to be a highly effective approach to robotic manipulator control compared with choosing an algebraic minimal action. As case studies, the present work implements the proposed approach on two manipulators: (a) a two-link arm manipulator and (b) a selective compliance assembly robotic arm. The scheme is compared with a baseline Fuzzy Q-Learning controller, a Lyapunov Markov game-based controller, and a Linguistic Lyapunov Reinforcement Learning controller. Simulation results show that the stochastic genetic algorithm-assisted Fuzzy Q-Learning controller outperforms the above-mentioned controllers in terms of tracking errors along with lower torque requirements.
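The action-selection mechanism described above can be sketched as a real-coded genetic algorithm searching a continuous action range for the action that optimizes a fuzzy-blended Q-value, instead of picking the extremal action from a discrete set. Everything below is an illustrative assumption, not the paper's implementation: the membership functions, the `fuzzy_q` form, and the GA settings are invented for the sketch, and it maximizes the Q-value where a cost-style formulation would minimize (equivalent up to a sign).

```python
import random

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_q(state, action, rule_q):
    """Hypothetical fuzzy Q-value: state-rule firing strengths blend
    per-rule Q estimates evaluated at the candidate action.
    rule_q is a list of (preferred_action, peak_q) pairs, one per rule."""
    centers = [-1.0, 0.0, 1.0]  # assumed rule centers over a scalar state
    weights = [triangular(state, c - 1.0, c, c + 1.0) for c in centers]
    total = sum(weights) or 1.0  # guard against all-zero firing
    return sum(w * (q - (action - a0) ** 2)
               for w, (a0, q) in zip(weights, rule_q)) / total

def ga_select_action(state, rule_q, pop_size=30, generations=40,
                     bounds=(-2.0, 2.0), mut_sigma=0.2, seed=0):
    """Select the action maximizing fuzzy_q with a simple real-coded GA:
    truncation selection, arithmetic crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fuzzy_q(state, a, rule_q), reverse=True)
        elite = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = 0.5 * (p1 + p2)          # arithmetic crossover
            child += rng.gauss(0.0, mut_sigma)  # Gaussian mutation
            children.append(min(hi, max(lo, child)))  # clamp to bounds
        pop = elite + children
    return max(pop, key=lambda a: fuzzy_q(state, a, rule_q))
```

With the middle rule fully fired (state 0.0 and `rule_q = [(-0.5, 1.0), (0.0, 2.0), (0.5, 1.0)]`), the GA converges near action 0.0 — the point a discrete action set might not contain, which is the motivation the abstract gives for searching stochastically rather than taking the algebraic extremum.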


Citations

Load–frequency control: a GA-based multi-agent reinforcement learning

TL;DR: In this article, a multi-agent reinforcement learning (MARL) approach is proposed to solve the load-frequency control (LFC) problem in a distributed multi-area power system. The approach uses two agents in each power area: an estimator agent that provides the area control error (ACE) signal based on frequency bias estimation, and a controller agent that uses reinforcement learning to control the power system.
Journal ArticleDOI

Fuzzy-based metaheuristic algorithm for optimization of fuzzy controller: fault-tolerant control application

TL;DR: The key contribution of this work is identifying the best approach for generating an optimal vector of membership-function parameters for the fuzzy controller, bringing the process value of the two-tank level-control process closer to the target process value (set point).
Journal ArticleDOI

Q-Learning-based model predictive variable impedance control for physical human-robot collaboration

TL;DR: In this paper, a Q-learning-based model predictive variable impedance controller (Q-LMPVIC) is proposed to assist operators in physical human-robot collaboration (pHRC) tasks.
Journal ArticleDOI

Shadowed Type-2 Fuzzy Sets in Dynamic Parameter Adaption in Cuckoo Search and Flower Pollination Algorithms for Optimal Design of Fuzzy Fault-Tolerant Controllers

TL;DR: In this paper, shadowed type-2 fuzzy inference systems (ST2FISs) are proposed for adjusting the Lévy flight (P) and switching probability (P′) parameters in the original cuckoo search (CS) and flower pollination (FP) algorithms.
Journal ArticleDOI

Evolving population method for real-time reinforcement learning

TL;DR: In this paper, the authors propose an evolutionary population-based method to improve the performance of reinforcement learning by optimizing hyperparameters and available actions in a real-time environment with large branching factors.
References
Proceedings Article

Safe Model-based Reinforcement Learning with Stability Guarantees

TL;DR: In this paper, the authors present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees, and show how statistical models of the dynamics can be used to obtain high-performance control policies with provable stability certificates.
Journal ArticleDOI

Model-Free Optimal Tracking Control via Critic-Only Q-Learning

TL;DR: This paper aims to solve the model-free optimal tracking control problem of nonaffine nonlinear discrete-time systems with a critic-only Q-learning (CoQL) method, which avoids solving the tracking Hamilton-Jacobi-Bellman equation.
Journal ArticleDOI

Robot manipulator control using neural networks: A survey

TL;DR: This survey analyzes the foundations of manipulator control and the theoretical ideas behind using neural networks to solve this problem, then describes and reviews in detail the latest progress on the topic in recent years.
Journal ArticleDOI

Admittance-Based Controller Design for Physical Human–Robot Interaction in the Constrained Task Space

TL;DR: It is proved, using Lyapunov stability principles, that all states of the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB).
Journal ArticleDOI

Neural Network Control of a Two-Link Flexible Robotic Manipulator Using Assumed Mode Method

TL;DR: The n-dimensional discretized model of the two-link flexible manipulator is developed by the assumed mode method (AMM), and both full-state feedback control and output feedback control are investigated to achieve trajectory tracking and vibration suppression.