Finite-time robust control of robot manipulator: a SDDRE based approach
TL;DR: A finite-time robust control law is proposed for a nonlinear, uncertain robot manipulator; stability is ensured both analytically and numerically in the presence of bounded uncertainty.
Abstract: This paper proposes a finite-time robust control law for a nonlinear, uncertain robot manipulator. Load variations and unmodeled system dynamics are the primary sources of uncertainty. The manipulator dynamics are modeled in State-Dependent Coefficient (SDC) form so that all nonlinear terms in the system dynamics are retained. Controlling such an uncertain system requires a robust control law, and an optimal control approach is adopted to design one. The control input is generated by solving a State-Dependent Differential Riccati Equation (SDDRE) forward in time, using the analytical solution of the SDDRE to compute the control law. The designed control law ensures stability, both analytically and numerically, in the presence of bounded uncertainty.
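For orientation, the sketch below records the generic SDC/SDDRE construction the abstract refers to. The weights Q and R, horizon t_f, and terminal penalty F are generic placeholders rather than the paper's reported design choices; the paper's forward-in-time analytical solution of the SDDRE is only indicated, not reproduced.

```latex
% Generic SDC factorization and finite-horizon quadratic cost
% (placeholders, not the paper's specific design):
\[
  \dot{x} = A(x)\,x + B(x)\,u, \qquad
  J = \tfrac{1}{2}\,x^{\top}(t_f)\,F\,x(t_f)
    + \tfrac{1}{2}\int_{t_0}^{t_f}\!\bigl(x^{\top}Q\,x + u^{\top}R\,u\bigr)\,dt .
\]
% The gain matrix solves the state-dependent differential Riccati
% equation with terminal condition P(t_f) = F:
\[
  -\dot{P} = A^{\top}(x)\,P + P\,A(x) - P\,B(x)\,R^{-1}B^{\top}(x)\,P + Q ,
\]
% and the state-feedback control law is
\[
  u = -R^{-1}B^{\top}(x)\,P(x,t)\,x .
\]
```

Textbook treatments (e.g., the optimal control reference listed under References below) give an analytical solution of the matrix differential Riccati equation, which is what allows the gain to be computed without backward numerical integration.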
Citations
01 Jan 2009
TL;DR: A transversal view through microfluidics theory and applications, covering different kinds of phenomena, from continuous to multiphase flow, and a vision of two-phase microfluidic phenomena is given through nonlinear analyses applied to experimental time series.
Abstract: This paper first offers a transversal view through microfluidics theory and applications, starting from a brief overview on microfluidic systems and related theoretical issues, covering different kinds of phenomena, from continuous to multiphase flow. Multidimensional models, from lumped parameters to numerical models and computational solutions, are then considered as preliminary tools for the characterization of spatio-temporal dynamics in microfluidic flows. Following these, experimental approaches through original monitoring opto-electronic interfaces and systems are discussed. Finally, a vision of two phase microfluidic phenomena is given through nonlinear analyses applied to experimental time series.
258 citations
TL;DR: It is proved that the robust attitude stabilization problem under actuator misalignments and disturbances can be reformulated into the problem of solving the Hamilton-Jacobi-Bellman (HJB) equation, and the computational burden of implementing the controller is significantly reduced.
Abstract: Optimal control techniques and related robust controller extensions have been widely studied for rigid spacecraft, but these methods cannot effectively handle the attitude stabilization problem under actuator misalignments. In this paper, a robust optimal controller is proposed for the spacecraft attitude stabilization problem under actuator misalignments. It is proved that the robust attitude stabilization problem under actuator misalignments and disturbances can be reformulated into the problem of solving the Hamilton-Jacobi-Bellman (HJB) equation. However, numerically solving the HJB equation suffers from the curse of dimensionality. By proving its positive definiteness, the value function for Sontag's formula is taken as the substitute for the solution of the HJB equation to analytically construct the robust optimal controller. Thus the computational burden of implementing the controller is significantly reduced. Simulation results also demonstrate the effectiveness and efficiency of the proposed controller.
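For reference, Sontag's universal formula in its common control-affine form is sketched below; V is a control Lyapunov function, and the paper's robust-optimal variant may differ in detail.

```latex
% System and Lie-derivative shorthand:
\[
  \dot{x} = f(x) + g(x)\,u, \qquad
  a(x) = \nabla V^{\top} f(x), \qquad
  b(x) = g(x)^{\top}\,\nabla V .
\]
% Sontag's formula (zero control when b(x) = 0):
\[
  u(x) =
  \begin{cases}
    -\dfrac{a(x) + \sqrt{a(x)^{2} + \lVert b(x)\rVert^{4}}}{\lVert b(x)\rVert^{2}}\; b(x), & b(x) \neq 0, \\[2ex]
    0, & b(x) = 0 .
  \end{cases}
\]
```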
5 citations
03 Dec 2022 - Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering
TL;DR: In this article, a terminal sliding mode control is introduced to control a class of nonlinear uncertain systems in finite time, where the sliding surface of the introduced controller is equipped with a finite-time gain that finishes the control task in the desired predefined time.
Abstract: A novel terminal sliding mode control is introduced to control a class of nonlinear uncertain systems in finite time. The goal of this work is to give the user command over the final time as an input control parameter. Terminal sliding mode control is naturally a finite-time controller, though the convergence time cannot be set as an input and is not exactly known to the user before execution of the control loop. The sliding surface of the introduced controller is equipped with a finite-time gain that finishes the control task in the desired predefined time. The gain is found by partitioning the state-dependent differential Riccati equation gain, then arranging the sub-blocks in a symmetric positive-definite structure. The state-dependent differential Riccati equation is a nonlinear optimal controller with a final boundary condition that penalizes the states at the final time; this guides the states to the desired condition by imposing extra force on the input control law. Here the gain is removed from the standard state-dependent differential Riccati equation control law, partitioned and made symmetric positive-definite, and inserted into the nonlinear sliding surface to present a novel finite-time terminal sliding mode control. The stability of the proposed terminal sliding mode control is guaranteed by the definition of its adaptive gain, which is limited by the Lyapunov stability condition. The proposed approach was validated and compared with the state-dependent differential Riccati equation controller and conventional terminal sliding mode control as independent controllers, applied to a van der Pol oscillator. Its capability to control complex systems was checked by simulating a flapping-wing flying robot, which possesses a highly nonlinear model with uncertainty and disturbance caused by flapping; the flight assumptions also limit the input law significantly. The proposed terminal sliding mode control successfully controlled both the illustrative example and the flapping-wing flying robot model.
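Since the abstract validates on a van der Pol oscillator, here is a minimal sketch of a conventional terminal sliding mode controller on that system, for orientation only. It is not the paper's SDDRE-gained design; the model parameter mu, the gains beta and eta, the exponent q/p, and the boundary-layer width are illustrative assumptions.

```python
# Minimal sketch: conventional terminal sliding mode control of a
# controlled van der Pol oscillator x'' = mu*(1 - x^2)*x' - x + u.
# NOT the paper's SDDRE-gained controller; all gains are illustrative.
import numpy as np

mu, beta, q_over_p, eta = 1.0, 2.0, 5.0 / 7.0, 3.0
dt, steps = 1e-3, 10_000

x, xd = 2.0, 0.0  # initial position and velocity
for _ in range(steps):
    # Terminal sliding surface: s = x' + beta * |x|^(q/p) * sign(x)
    s = xd + beta * np.abs(x) ** q_over_p * np.sign(x)
    drift = mu * (1.0 - x**2) * xd - x  # uncontrolled dynamics f(x, x')
    # Time derivative of the nonlinear surface term; flooring |x| avoids
    # the well-known q/p - 1 singularity of conventional TSMC near x = 0.
    surf_dot = beta * q_over_p * max(np.abs(x), 1e-3) ** (q_over_p - 1.0) * xd
    # Equivalent control plus smoothed switching term (tanh boundary layer)
    u = -drift - surf_dot - eta * np.tanh(s / 0.05)
    # Explicit Euler integration of the closed loop
    x, xd = x + dt * xd, xd + dt * (drift + u)

print(f"final state: x = {x:.4f}, x' = {xd:.4f}")  # both settle near zero
```

On the surface s = 0, the error obeys x' = -beta * |x|^(q/p) * sign(x), which reaches the origin in finite time whenever 0 < q/p < 1; the paper's contribution is making that time a user-specified input via the partitioned SDDRE gain.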
2 citations
Dissertation
01 Jan 2018
Abstract: Doctor of Philosophy dissertation, Intelligent Mechatronic Systems Group, Faculty of Engineering and IT.
1 citation
Cites background from "Finite-time robust control of robot..."
References
Book
TL;DR: In this book, the authors present results of both classic and recent matrix analysis using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications.
Abstract: Linear algebra and matrix theory are fundamental tools in mathematical and physical science, as well as fertile fields for research. This new edition of the acclaimed text presents results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrates their importance in a variety of applications. The authors have thoroughly revised, updated, and expanded on the first edition. The book opens with an extended summary of useful concepts and facts and includes numerous new topics and features, such as:
- New sections on the singular value and CS decompositions
- New applications of the Jordan canonical form
- A new section on the Weyr canonical form
- Expanded treatments of inverse problems and of block matrices
- A central role for the Von Neumann trace theorem
- A new appendix with a modern list of canonical forms for a pair of Hermitian matrices and for a symmetric-skew symmetric pair
- Expanded index with more than 3,500 entries for easy reference
- More than 1,100 problems and exercises, many with hints, to reinforce understanding and develop auxiliary themes such as finite-dimensional quantum systems, the compound and adjugate matrices, and the Loewner ellipsoid
- A new appendix with a collection of problem-solving hints
23,959 citations
TL;DR: This book discusses classical and modern control optimization, optimal control and its history, variational calculus for discrete-time systems, and more.
Abstract: Contents:
- INTRODUCTION: Classical and Modern Control Optimization; Optimal Control; Historical Tour; About This Book; Chapter Overview; Problems
- CALCULUS OF VARIATIONS AND OPTIMAL CONTROL: Basic Concepts; Optimum of a Function and a Functional; The Basic Variational Problem; The Second Variation; Extrema of Functions with Conditions; Extrema of Functionals with Conditions; Variational Approach to Optimal Systems; Summary of Variational Approach; Problems
- LINEAR QUADRATIC OPTIMAL CONTROL SYSTEMS I: Problem Formulation; Finite-Time Linear Quadratic Regulator; Analytical Solution to the Matrix Differential Riccati Equation; Infinite-Time LQR System I; Infinite-Time LQR System II; Problems
- LINEAR QUADRATIC OPTIMAL CONTROL SYSTEMS II: Linear Quadratic Tracking System: Finite-Time Case; LQT System: Infinite-Time Case; Fixed-End-Point Regulator System; Frequency-Domain Interpretation; Problems
- DISCRETE-TIME OPTIMAL CONTROL SYSTEMS: Variational Calculus for Discrete-Time Systems; Discrete-Time Optimal Control Systems; Discrete-Time Linear State Regulator Systems; Steady-State Regulator System; Discrete-Time Linear Quadratic Tracking System; Frequency-Domain Interpretation; Problems
- PONTRYAGIN MINIMUM PRINCIPLE: Constrained Systems; Pontryagin Minimum Principle; Dynamic Programming; The Hamilton-Jacobi-Bellman Equation; LQR System using H-J-B Equation
- CONSTRAINED OPTIMAL CONTROL SYSTEMS: Constrained Optimal Control; TOC of a Double Integral System; Fuel-Optimal Control Systems; Minimum Fuel System: LTI System; Energy-Optimal Control Systems; Optimal Control Systems with State Constraints; Problems
- APPENDICES: Vectors and Matrices; State Space Analysis; MATLAB Files
- REFERENCES
- INDEX
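Two of the chapters listed above, the finite-time linear quadratic regulator and the analytical solution of the matrix differential Riccati equation, supply exactly the machinery that SDDRE designs apply pointwise in the state. In standard form:

```latex
% Finite-horizon LQR: differential Riccati equation and optimal gain.
\[
  -\dot{P} = A^{\top}P + P A - P B R^{-1} B^{\top} P + Q ,
  \qquad P(t_f) = F , \qquad u^{*} = -R^{-1}B^{\top}P\,x .
\]
% The analytical solution is obtained from the state-transition matrix
% of the associated Hamiltonian system:
\[
  \frac{d}{dt}\begin{bmatrix} x \\ \lambda \end{bmatrix}
  = \begin{bmatrix} A & -B R^{-1} B^{\top} \\ -Q & -A^{\top} \end{bmatrix}
    \begin{bmatrix} x \\ \lambda \end{bmatrix},
  \qquad \lambda = P\,x .
\]
```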
1,256 citations
"Finite-time robust control of robot..." refers background or methods in this paper
TL;DR: This work describes mathematical formulations for reinforcement learning and a practical implementation method known as adaptive dynamic programming, which give insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.
Abstract: Living organisms learn by acting on their environment, observing the resulting reward stimulus, and adjusting their actions accordingly to improve the reward. This action-based or reinforcement learning can capture notions of optimal behavior occurring in natural systems. We describe mathematical formulations for reinforcement learning and a practical implementation method known as adaptive dynamic programming. These give us insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.
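As a concrete instance of the adaptive dynamic programming idea, the sketch below runs policy iteration (Hewer's algorithm) on a discrete-time LQR problem. The matrices A, B, Q, R are illustrative assumptions, and a genuinely adaptive implementation would estimate the policy-evaluation step from measured trajectory data instead of the model.

```python
# Minimal sketch of policy iteration (Hewer's algorithm) for a
# discrete-time LQR problem -- one concrete instance of the dynamic-
# programming iteration behind ADP.  A, B, Q, R are illustrative;
# model-free ADP would estimate the evaluation step from measured
# trajectory data instead of solving a Lyapunov equation.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # discretized double integrator
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[1.0, 2.0]])  # any stabilizing initial policy u = -K x
for _ in range(30):
    A_cl = A - B @ K
    # Policy evaluation: P solves A_cl' P A_cl - P + (Q + K' R K) = 0
    P = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
    # Policy improvement: one-step greedy gain under the evaluated cost
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print("converged gain K:", K)  # approaches the DARE-optimal gain
```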
902 citations
Book
TL;DR: This thoroughly up-to-date Second Edition of Robot Manipulator Control explicates theoretical and mathematical requisites for controls design and summarizes current techniques in computer simulation and implementation of controllers.
Abstract: Robot Manipulator Control offers a complete survey of control systems for serial-link robot arms and acknowledges how robotic device performance hinges upon a well-developed control system. Containing over 750 essential equations, this thoroughly up-to-date Second Edition explicates theoretical and mathematical requisites for controls design and summarizes current techniques in computer simulation and implementation of controllers. It also addresses procedures and issues in computed-torque, robust, adaptive, neural network, and force control. New chapters relay practical information on commercial robot manipulators and devices and cutting-edge methods in neural network control.
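The computed-torque scheme the blurb mentions is the standard baseline for manipulator control; with tracking error e = q_d - q it reads:

```latex
% Computed-torque control law for the manipulator dynamics
% M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau :
\[
  \tau = M(q)\bigl(\ddot{q}_d + K_d\,\dot{e} + K_p\,e\bigr)
       + C(q,\dot{q})\,\dot{q} + g(q) ,
  \qquad e = q_d - q .
\]
% Substituting into the dynamics yields the linear, decoupled error
% dynamics \ddot{e} + K_d\dot{e} + K_p e = 0.
```

Robust and adaptive variants, including the citing paper's design, relax the exact-model requirement of this cancellation.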
862 citations
"Finite-time robust control of robot..." refers background in this paper
TL;DR: In this article, the authors describe the use of reinforcement learning to design feedback controllers for discrete and continuous-time dynamical systems that combine features of adaptive control and optimal control, which are not usually designed to be optimal in the sense of minimizing user-prescribed performance functions.
Abstract: This article describes the use of principles of reinforcement learning to design feedback controllers for discrete- and continuous-time dynamical systems that combine features of adaptive control and optimal control. Adaptive control [1], [2] and optimal control [3] represent different philosophies for designing feedback controllers. Optimal controllers are normally designed offline by solving Hamilton-Jacobi-Bellman (HJB) equations, for example, the Riccati equation, using complete knowledge of the system dynamics. Determining optimal control policies for nonlinear systems requires the offline solution of nonlinear HJB equations, which are often difficult or impossible to solve. By contrast, adaptive controllers learn online to control unknown systems using data measured in real time along the system trajectories. Adaptive controllers are not usually designed to be optimal in the sense of minimizing user-prescribed performance functions. Indirect adaptive controllers use system identification techniques to first identify the system parameters and then use the obtained model to solve optimal design equations [1]. Adaptive controllers may satisfy certain inverse optimality conditions [4].
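For the continuous-time control-affine case the article alludes to, with running cost Q(x) + u'Ru, the HJB equation and its minimizing control are:

```latex
% HJB equation for \dot{x} = f(x) + g(x)u and value function V:
\[
  0 = \min_{u}\Bigl[\,Q(x) + u^{\top}R\,u
      + \nabla V^{\top}\bigl(f(x) + g(x)\,u\bigr)\Bigr]
  \quad\Longrightarrow\quad
  u^{*} = -\tfrac{1}{2}\,R^{-1} g^{\top}(x)\,\nabla V ,
\]
% which, after substituting u* back in, leaves
\[
  0 = Q(x) + \nabla V^{\top} f(x)
    - \tfrac{1}{4}\,\nabla V^{\top} g(x)\,R^{-1} g^{\top}(x)\,\nabla V .
\]
```

For linear dynamics and quadratic V = x'Px this collapses to the Riccati equation mentioned in the abstract.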
618 citations