Lyapunov fuzzy Markov game controller for two link robotic manipulator
About: This article was published in the Journal of Intelligent and Fuzzy Systems on 2018-01-01 and has received 5 citations to date. It focuses on the topics: control theory & Lyapunov function.
Citations
TL;DR: A neural-network-based adaptive terminal control method for the motion control of a manipulator is proposed, and the stability analysis of the closed-loop system is carried out.
Abstract: With the development of the economy, science, and technology, computer vision has advanced rapidly, and products relying on it, such as smart homes and robots, are increasingly common. Robot technology has become a very important part of human scientific and technological development, and within industrial robotics, manipulators with adaptive arm motion are developing the fastest, so studying machine-learning-based adaptive motion control of the manipulator is very necessary. A robot with adaptive manipulator motion can sort logistics parcels, work on doors and windows outside buildings, and pick fruit in orchards, ensuring that such demanding work is carried out effectively. This paper therefore proposes a neural-network-based adaptive control method for the manipulator. Based on the motion model of the manipulator, an RBF neural network model is used, and the stability of the system is judged by a Lyapunov function. The relevant algorithms for machine learning and multi-degree-of-freedom manipulators are studied and improved. The RBF neural network approximates the unknown function arbitrarily closely and is then used to establish the complex motion model. For the adaptive neural network of the manipulator, a network adaptive terminal control method is proposed. First, a stable manipulator motion system is designed using a neural network; then a terminal sliding-mode controller is designed using backstepping control. The stability of the method is proved using the virtual-control approximation capability of the neural network. Adaptive control is realized through the learning ability and self-adaptability of the neural network, and the stability analysis of the closed-loop system is thereby completed.
23 citations
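The abstract above rests on an RBF neural network approximating an unknown function arbitrarily closely. A minimal sketch of that approximation idea, with the centers, widths, and the sin target all invented for illustration (none of it comes from the cited paper):

```python
import numpy as np

def rbf_approximate(x, centers, widths, weights):
    """Evaluate an RBF network: a weighted sum of Gaussian basis functions
    phi_i(x) = exp(-||x - c_i||^2 / w_i^2)."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / widths ** 2)
    return weights @ phi

# Fit the output weights by least squares against a stand-in "unknown" function.
centers = np.linspace(-3, 3, 25).reshape(-1, 1)   # fixed Gaussian centers
widths = np.full(25, 0.5)                         # fixed Gaussian widths
xs = np.linspace(-3, 3, 200).reshape(-1, 1)
ys = np.sin(xs).ravel()                           # stand-in for unknown dynamics
Phi = np.exp(-((xs - centers.T) ** 2) / widths ** 2)
weights, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

err = np.max(np.abs(Phi @ weights - ys))          # worst-case error on the grid
```

In adaptive control the weights would be updated online by a Lyapunov-derived law rather than by batch least squares; the batch fit above only illustrates the approximation capacity.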
TL;DR: Employing a genetic algorithm as a stochastic optimizer for action selection at each stage of a Fuzzy Q-Learning-based controller proves far more effective for robotic manipulator control than choosing an algebraically minimal action.
Abstract: This work proposes stochastic genetic algorithm-assisted Fuzzy Q-Learning-based robotic manipulator control. Specifically, the aim is to redefine the action-choosing mechanism in Fuzzy Q-Learning for robotic manipulator control. Conventionally, a Fuzzy Q-Learning-based controller selects a deterministic action from the available actions using fuzzy Q values. This deterministic Fuzzy Q-Learning is not an efficient approach, especially for highly coupled nonlinear systems such as robotic manipulators; restricting the search for the optimal action to the agent's action set or a restricted set of Q values is a myopic idea. Herein, the proposal is to employ a genetic algorithm as a stochastic optimizer for action selection at each stage of the Fuzzy Q-Learning-based controller, which turns out to be far more effective for robotic manipulator control than choosing an algebraically minimal action. As case studies, the present work implements the proposed approach on two manipulators: (a) a two-link arm manipulator and (b) a selective compliance assembly robotic arm. The scheme is compared with a baseline Fuzzy Q-Learning controller, a Lyapunov Markov game-based controller, and a Linguistic Lyapunov Reinforcement Learning controller. Simulation results show that the stochastic genetic algorithm-assisted Fuzzy Q-Learning controller outperforms the above-mentioned controllers in terms of tracking errors along with lower torque requirements.
5 citations
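The key move in the abstract above is replacing the deterministic argmin over a discrete action set with a genetic-algorithm search over actions. A toy sketch of that selection step, using a made-up quadratic stand-in for the cost landscape (the real controller would evaluate fuzzy Q values here):

```python
import numpy as np

def ga_select_action(cost, bounds, pop_size=30, gens=40, seed=0):
    """Tiny real-coded GA: evolve candidate actions within `bounds` to
    minimise `cost`, instead of taking the argmin over a fixed action set."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        fit = np.array([cost(a) for a in pop])
        # Tournament selection: keep the better of two random individuals.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = np.where(fit[idx[:, 0]] < fit[idx[:, 1]],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # Blend crossover plus Gaussian mutation, clipped back into bounds.
        children = 0.5 * (parents + rng.permutation(parents))
        children += rng.normal(0.0, 0.05 * (hi - lo), pop_size)
        pop = np.clip(children, lo, hi)
    return pop[np.argmin([cost(a) for a in pop])]

# Made-up cost with its minimum at a = 0.7, standing in for fuzzy Q values.
best = ga_select_action(lambda a: (a - 0.7) ** 2, bounds=(-2.0, 2.0))
```

Because the GA searches a continuous interval, the selected action is not restricted to a predefined discrete action set, which is the point the abstract makes.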
TL;DR: This special issue discusses various aspects and recent advances in intelligent systems technologies and applications, including a decision-support system that considers state-of-the-art eye-gaze features to understand cognitive processing.
Abstract: The aim of this special issue is to discuss various aspects and recent advances in the area of intelligent systems technologies and applications. The 50 papers were selected after a rigorous peer review and recommendation of the guest editors. The special issue covers various topics such as computational intelligence, pattern recognition, machine learning, and their applications to a wide spectrum of domains including healthcare [1–10]; image processing, computer vision, and biometrics [11–18]; social media, document analysis, natural language processing, and recommendation systems [19–27]; security, information hiding, and safety [28–35]; cloud computing services and management [36–38]; and energy, control, industrial, and other applications [39–50]. Intelligent-Based Decision Support System for Diagnosing Glaucoma in Primary Eye Care Centers Using Eye Tracker [1]: It is quite alarming that the increase of glaucoma is due to the lack of awareness of the disease and the cost of glaucoma screening. Primary eye care centers need to include comprehensive glaucoma screening and machine learning models to enhance screening as a decision support system. The proposed system considers the […]
1 citation
Cites background from "Lyapunov fuzzy Markov game controll..."
...In [23], a self-learning adaptive and optimal Lyapunov fuzzy Markov game controller for safe and stable tracking control of two-link robotic manipulators is proposed....
TL;DR: Equivalent relations are provided to characterize the family of all solutions that admit a potential on weights, and several axiomatic results are proposed to justify the supreme-weighted value.
Abstract: By considering the supreme-utilities among fuzzy sets and the weights among participants simultaneously, we introduce the supreme-weighted value on fuzzy transferable-utility games. Further, we provide some equivalent relations to characterize the family of all solutions that admit a potential on weights. We also propose the dividend approach to provide an alternative viewpoint on the potential approach. Based on these equivalent relations, several axiomatic results are also proposed to establish the rationality of the supreme-weighted value.
References
TL;DR: This article attempts to strengthen the links between the robotics and reinforcement learning research communities by surveying work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.
2,391 citations
TL;DR: This work describes mathematical formulations for reinforcement learning and a practical implementation method known as adaptive dynamic programming, which together give insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.
Abstract: Living organisms learn by acting on their environment, observing the resulting reward stimulus, and adjusting their actions accordingly to improve the reward. This action-based or reinforcement learning can capture notions of optimal behavior occurring in natural systems. We describe mathematical formulations for reinforcement learning and a practical implementation method known as adaptive dynamic programming. These give us insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.
1,163 citations
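Adaptive dynamic programming, as described above, iterates a value function toward the optimal one. A minimal sketch on a scalar discrete-time LQR problem, where value iteration reduces to the Riccati recursion (the system and cost numbers are invented for illustration):

```python
# Scalar discrete-time LQR: x_{k+1} = a x_k + b u_k, cost = sum of q x^2 + r u^2.
# Value iteration on V(x) = P x^2 is the Riccati recursion
#     P <- q + a^2 P - (a b P)^2 / (r + b^2 P),
# whose fixed point gives the optimal value function and feedback gain.
a, b, q, r = 0.9, 1.0, 1.0, 1.0   # invented system/cost parameters

P = 0.0
for _ in range(200):
    P = q + a ** 2 * P - (a * b * P) ** 2 / (r + b ** 2 * P)

gain = a * b * P / (r + b ** 2 * P)   # optimal feedback is u = -gain * x
```

ADP methods approximate this same fixed point online from observed rewards, without knowing a and b; the exact recursion above only shows what they converge to.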
01 Mar 2016
TL;DR: An adaptive impedance controller for an n-link robotic manipulator with input saturation is developed by employing neural networks; both uncertainties and input saturation are considered in the tracking control design, and the input saturation is handled by designing an auxiliary system.
Abstract: In this paper, adaptive impedance control is developed for an n-link robotic manipulator with input saturation by employing neural networks. Both uncertainties and input saturation are considered in the tracking control design. In order to approximate the system uncertainties, we introduce a radial basis function neural network controller, and the input saturation is handled by designing an auxiliary system. Using Lyapunov's method, we design adaptive neural impedance controllers; both state-feedback and output-feedback versions are constructed. Extensive simulations are conducted to verify the proposed control.
685 citations
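The abstract above handles input saturation with an auxiliary system that absorbs the difference between the commanded and the saturated input. A toy scalar simulation of that anti-windup idea, with an invented first-order plant and gains (not the paper's n-link design):

```python
import numpy as np

def sat(u, u_max):
    """Actuator saturation: commands are clipped to +/- u_max."""
    return float(np.clip(u, -u_max, u_max))

# Invented scalar plant x' = -x + sat(u), driven toward x_d by a proportional
# law corrected by an auxiliary state xi fed with the saturation mismatch.
dt, u_max, k, k_aux = 0.01, 1.0, 20.0, 10.0
x, xi, x_d = 0.0, 0.0, 0.8
for _ in range(2000):                 # 20 s of Euler integration
    u = k * (x_d - x) - xi            # control backs off while xi is large
    du = sat(u, u_max) - u            # nonzero only while saturating
    xi += dt * (-k_aux * xi - du)     # auxiliary system absorbs the excess
    x += dt * (-x + sat(u, u_max))    # plant sees only the saturated input
```

While the command exceeds the limit, xi grows and pulls the command back toward the feasible range; once the loop leaves saturation, xi decays to zero and the nominal controller takes over.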
TL;DR: A neural network (NN) controller is designed to suppress the vibration of a flexible robotic manipulator system with input deadzone and is able to compensate for the estimated deadzone effect and track the desired trajectory.
Abstract: In this paper, a neural network (NN) controller is designed to suppress the vibration of a flexible robotic manipulator system with input deadzone. The NN aims to approximate the unknown robotic manipulator dynamics and eliminate the effects of input deadzone in the actuators. In order to describe the system more accurately, the model of the flexible manipulator is constructed based on the lumped spring-mass method. Full-state-feedback NN control is proposed first, and output-feedback NN control with a high-gain observer is then devised to make the proposed scheme more practical. The effect of input deadzone is approximated by a radial basis function neural network (RBFNN), and the unknown dynamics of the manipulator are approximated by another RBFNN. The proposed NN control compensates for the estimated deadzone effect and tracks the desired trajectory. For the stability analysis, Lyapunov's direct method is used to ensure uniform ultimate boundedness (UUB) of the closed-loop system. Simulations verify the control performance of the NN controllers in comparison with a proportional-derivative (PD) controller. Finally, experiments are conducted on the Quanser platform to further demonstrate the feasibility and control performance of the NN controllers.
319 citations
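The deadzone that the abstract above compensates for can be written explicitly, and with estimated breakpoints a pre-inverse cancels it. In the paper an RBFNN supplies the estimate; the sketch below simply assumes the breakpoints are known (all values invented):

```python
def deadzone(u, br, bl, m=1.0):
    """Actuator deadzone: zero output while the command lies in [bl, br],
    linear with slope m outside that band."""
    if u >= br:
        return m * (u - br)
    if u <= bl:
        return m * (u - bl)
    return 0.0

def compensate(v, br_hat, bl_hat, m=1.0):
    """Deadzone pre-inverse built from *estimated* breakpoints; a stand-in
    for the RBFNN-based compensation in the paper."""
    return v / m + (br_hat if v >= 0 else bl_hat)

# With exact estimates, compensation followed by the deadzone is the identity.
out = deadzone(compensate(0.5, 0.3, -0.2), 0.3, -0.2)
```

When the estimated breakpoints are inexact, the cascade leaves a bounded residual, which is what the Lyapunov analysis in the paper bounds via uniform ultimate boundedness.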