Book ChapterDOI

Reinforcement Learning-Based Controller for Field-Oriented Control of Induction Machine

01 Jan 2019, pp. 737-749
TL;DR: In this study, a reinforcement learning-based controller has been designed to control the speed of an induction machine using field-oriented control, and the controller performance has been verified for various operating conditions by computer simulation in MATLAB/SIMULINK.
Abstract: This paper presents the concept of reinforcement learning-based field-oriented control (FOC) of an induction motor. Conventional controllers such as PID used for FOC of induction machines are model-based and face the issue of parameter tuning. Periodic retuning of PID controllers is required to take care of model approximations, parameter variations of the system during operation, and external disturbances, which are random in character, magnitude, and place of occurrence in the system. Reinforcement learning is a model-free, online learning technique which can take care of parameter variations. These properties make reinforcement learning a potential candidate to act as an adaptive controller that can replace conventional controllers. In this study, a reinforcement learning-based controller has been designed to control the speed of an induction machine using field-oriented control. The controller performance has been verified for various operating conditions by computer simulation in MATLAB/SIMULINK.
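The abstract's outer-loop idea can be sketched with a tabular Q-learning speed controller: the state is a quantized speed error, the action is a quantized adjustment to the torque reference. All names, discretization sizes, and hyperparameters below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical sketch of an RL speed controller for the FOC outer loop.
# State = quantized speed error (p.u.); action = torque-reference step (p.u.).
N_STATES, N_ACTIONS = 21, 5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1          # assumed learning hyperparameters
actions = np.linspace(-1.0, 1.0, N_ACTIONS)  # candidate delta-torque commands

Q = np.zeros((N_STATES, N_ACTIONS))  # action-value table

def quantize_error(err, max_err=1.0):
    """Map a speed error in [-max_err, max_err] to a discrete state index."""
    err = np.clip(err, -max_err, max_err)
    return int(round((err + max_err) / (2 * max_err) * (N_STATES - 1)))

def select_action(state, rng):
    """Epsilon-greedy action selection over the Q-table row."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def update(s, a, reward, s_next):
    """One Q-learning (off-policy temporal-difference) update."""
    td_target = reward + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (td_target - Q[s, a])
```

In a closed-loop run, `select_action` would be called once per control period and `update` applied with a reward that penalizes tracking error; being model-free, the table adapts if the machine parameters drift.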
Citations
TL;DR: In this article, a novel analytical approach for time-response shaping of the PI controller in the field-oriented control (FOC) of the PMSM is presented, based on pre-defined dynamic responses of the motor speed and the dq-axis currents.
Abstract: Background and Objectives: Permanent magnet synchronous motors (PMSM) have received much attention due to their high torque as well as low noise values. However, several PI blocks are needed for field, torque, and speed control of the PMSM, which complicates controller design in the vector control approach. To cope with these issues, a novel analytical approach for time-response shaping of the PI controller in the field-oriented control (FOC) of the PMSM is presented in this manuscript. In the proposed method, it is possible to design the control loops based on pre-defined dynamic responses of the motor speed and the dq-axis currents. It should be noted that as a decoupled model of the motor is employed in the controller development, the closed-loop system has a linear model and hence the designed PI controllers are able to stabilize the PMSM in a wide range of operation. Methods: To design the controllers and choose the PI gains, the characteristic of the closed-loop response is formulated analytically. According to pre-defined dynamic responses of the motor speed and the dq-axis currents, e.g., desired maximum overshoot and rise-time values, the gains of the controllers are calculated analytically. As the extracted equation set for controller tuning includes a nonlinear term, the Newton-Raphson numerical approach is employed to solve the nonlinear equation set. In addition, the designed system is evaluated under different tests, such as step changes of the references. Finally, it should be noted that as decoupled models are employed for the PMSM system, the exact behavior of the closed-loop system can be expressed via a linear model. As a result, stability of the proposed approach can be guaranteed over the whole operational range of the system. Results: The control loops of the closed-loop system are designed for speed control of the PMSM.
To evaluate the accuracy and effectiveness of the controllers, the system has been simulated using MATLAB/Simulink software. Moreover, the TMS320F28335 digital signal processor (DSP) from Texas Instruments is used for experimental investigation of the controllers. Conclusion: Considering the simulation and practical results, it is shown that the proposed analytical approach is able to select the control gains with negligible error. It is shown that the proposed approach yields at most 0.01% error in the rise-time and overshoot calculations for the step response of the motor speed at 500 rpm.
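The Newton-Raphson step described in this abstract can be illustrated on the textbook second-order spec equations: the damping ratio is recovered from a desired overshoot by Newton iteration, and PI gains follow by pole matching against an assumed first-order mechanical plant. The plant model, gain formulas, and default values here are our illustrative assumptions, not the paper's equation set.

```python
import math

def overshoot(zeta):
    """Percent overshoot (as a fraction) of a standard second-order step response."""
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta * zeta))

def solve_zeta(mp_target, zeta0=0.5, tol=1e-10, max_iter=50):
    """Newton-Raphson: find the damping ratio giving the target overshoot."""
    zeta = zeta0
    for _ in range(max_iter):
        f = overshoot(zeta) - mp_target
        # analytic derivative: dMp/dzeta = -pi * Mp / (1 - zeta^2)^(3/2)
        g = -math.pi * overshoot(zeta) / (1.0 - zeta * zeta) ** 1.5
        step = f / g
        zeta = min(max(zeta - step, 1e-3), 0.999)  # keep zeta in (0, 1)
        if abs(step) < tol:
            break
    return zeta

def pi_gains(mp_target, t_rise, J=0.01, B=0.1):
    """Map (overshoot, rise-time) specs to PI gains for an ASSUMED
    first-order mechanical plant G(s) = 1/(J*s + B) under a PI speed loop.
    The closed-loop zero is ignored here, so the real overshoot differs."""
    zeta = solve_zeta(mp_target)
    wn = (math.pi - math.acos(zeta)) / (t_rise * math.sqrt(1.0 - zeta**2))
    # characteristic polynomial s^2 + (B+Kp)/J s + Ki/J  matched to
    # s^2 + 2*zeta*wn*s + wn^2
    kp = 2.0 * zeta * wn * J - B
    ki = wn * wn * J
    return kp, ki
```

For example, `pi_gains(0.05, 0.05)` returns positive gains sized for 5% overshoot and a 50 ms rise time under the assumed inertia and friction.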

1 citation

Proceedings ArticleDOI
16 Nov 2022
TL;DR: In this paper, a non-linear feedback controller for an induction motor based on a reinforcement learning agent is proposed. The controller regulates the outer loop, which controls the speed of the induction motor, by taking different actions based on the given state.
Abstract: Induction motors are well known in industrial applications due to their low cost and reduced maintenance. Vector control, or field-oriented control, is the most preferred and reliable control technique in the industrial world. Field-oriented control is based on conventional controllers such as PID, mainly due to their simple structure and low complexity. However, when subjected to external disturbances or internal parameter changes, it is quite difficult for these conventional controllers to fulfill the control requirements. The purpose of this research is to design a suitable non-linear feedback controller for an induction motor based on a reinforcement learning agent. The proposed controller uses only the reference speed and the error (difference) between the reference speed and the output as control inputs to produce a torque such that the rotor speed tracks the reference speed. A reinforcement learning-based speed control algorithm is implemented, and an overall analysis of the closed-loop system is carried out. It is shown that the proposed controller is capable of controlling the outer loop, which regulates the speed of the induction motor, by taking different actions based on the given state. The performance of the proposed control scheme is verified under various operating conditions using simulation results.
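The closed-loop interaction this abstract describes (agent sees reference speed and speed error, outputs a torque command, rotor speed responds) can be sketched with a first-order mechanical model. The plant parameters, time step, and absolute-error reward shape below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the speed-control loop: policy(ref, error) -> torque,
# applied to an assumed first-order mechanical model of the rotor.
J, B, DT = 0.01, 0.1, 1e-3  # inertia, viscous friction, step size (assumed)

def plant_step(omega, torque, load=0.0):
    """Forward-Euler step of d(omega)/dt = (T_e - T_load - B*omega) / J."""
    return omega + DT * (torque - load - B * omega) / J

def reward(ref, omega):
    """Penalize absolute tracking error (a common reward-shaping choice)."""
    return -abs(ref - omega)

def episode(policy, ref=100.0, steps=2000):
    """Roll the loop forward; `policy` maps (ref, error) to a torque command.
    Returns the final speed and the accumulated reward."""
    omega, total = 0.0, 0.0
    for _ in range(steps):
        torque = policy(ref, ref - omega)
        omega = plant_step(omega, torque)
        total += reward(ref, omega)
    return omega, total
```

A trained agent would replace the policy argument; as a placeholder, `episode(lambda r, e: 0.5 * e)` runs a plain proportional law, which settles with a steady-state error the learning agent is meant to eliminate.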
Proceedings ArticleDOI
20 Nov 2022
TL;DR: In this paper, the authors show the effective utilization and feasibility of field-oriented control of AC induction motors using PI, PD, and PID controllers, replacing physical sensors with observers. The speed, torque, and position are estimated, analyzed, and simulated with a motor control block-set in MATLAB/Simulink for a 10 HP squirrel-cage induction motor fed from a VSI using SVPWM for effective modulation.
Abstract: With the Field-Oriented Control (FOC) vector approach, the induction motor behaves similarly to a separately excited DC motor. The torque and flux components in the d-q rotating reference frame can be independently controlled with the help of unit vectors. The main focus of the present work is to show the effective utilization and feasibility of field-oriented control of AC induction motors using PI, PD, and PID controllers, replacing physical sensors with observers. The speed, torque, and position in this scheme are estimated, analyzed, and simulated with the help of a motor control block-set in MATLAB/Simulink for a 10 HP squirrel-cage induction motor fed from a VSI using SVPWM for effective modulation.
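The d-q decoupling this abstract relies on comes from the standard Clarke and Park transforms; a minimal sketch of the amplitude-invariant forms follows (textbook equations, not anything specific to this paper's setup).

```python
import math

def clarke(ia, ib, ic):
    """abc -> stationary alpha-beta frame (amplitude-invariant form)."""
    i_alpha = (2.0 * ia - ib - ic) / 3.0
    i_beta = (ib - ic) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """alpha-beta -> d-q frame rotating at the flux angle theta."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q
```

With a balanced three-phase set and theta equal to the flux angle, the d-q currents become DC quantities, which is what lets separate PI/PD/PID loops regulate flux and torque independently.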
Journal ArticleDOI
TL;DR: In this article, a non-linear feedback controller for an induction motor based on a reinforcement learning agent is proposed. The controller uses the reference speed and the error (difference) between the reference speed and the output as control inputs to produce a torque such that the rotor speed tracks the reference speed, and its performance is verified under various operating conditions using simulation results.
Abstract: Induction motors are well known in industrial applications due to their low cost and reduced maintenance. Vector control, or field-oriented control, is the most preferred and reliable control technique in the industrial world. Field-oriented control is based on conventional controllers such as PID, mainly due to their simple structure and low complexity. However, when subjected to external disturbances or internal parameter changes, it is quite difficult for these conventional controllers to fulfill the control requirements. The purpose of this research is to design a suitable non-linear feedback controller for an induction motor based on a reinforcement learning agent. The proposed controller uses only the reference speed and the error (difference) between the reference speed and the output as control inputs to produce a torque such that the rotor speed tracks the reference speed. A reinforcement learning-based speed control algorithm is implemented, and an overall analysis of the closed-loop system is carried out. It is shown that the proposed controller is capable of controlling the outer loop, which regulates the speed of the induction motor, by taking different actions based on the given state. The performance of the proposed control scheme is verified under various operating conditions using simulation results.
References
Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
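Part II's solution methods can be illustrated with the tabular TD(0) value update on a small random walk, in the style of the book's own examples; the code itself is an illustrative sketch, not taken from the book.

```python
import random

def td0(episodes=1000, alpha=0.1, gamma=1.0, n=5, seed=0):
    """Estimate state values of an n-state random walk with TD(0).
    Reward 1 on exiting to the right, 0 on exiting to the left
    (the classic random-walk example). Returns the value table."""
    rng = random.Random(seed)
    V = [0.5] * n  # optimistic-neutral initial values for nonterminal states
    for _ in range(episodes):
        s = n // 2  # start in the middle
        while True:
            s2 = s + (1 if rng.random() < 0.5 else -1)
            if s2 < 0:    # left terminal: reward 0, value of terminal is 0
                V[s] += alpha * (0.0 - V[s])
                break
            if s2 >= n:   # right terminal: reward 1
                V[s] += alpha * (1.0 - V[s])
                break
            # nonterminal step: bootstrap from the successor's estimate
            V[s] += alpha * (gamma * V[s2] - V[s])
            s = s2
    return V
```

The learned values approximate the true probabilities of exiting right (1/6 through 5/6 for five states), converging from bootstrapped one-step targets rather than full returns, which is the core contrast with the Monte Carlo methods covered alongside it.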

37,989 citations

01 Jan 1989

4,916 citations

Book
01 Jan 2015
TL;DR: In this book, the authors present simulations of six-step thyristor inverters, three-level inverters, and three-phase bridge inverters, and introduce neural networks for identification and control together with a demo program using the Neural Network Toolbox.
Abstract: (NOTE: Each chapter begins with an Introduction and concludes with a Summary and References.) Preface. List of Principal Symbols. 1. Power Semiconductor Devices. Diodes. Thyristors. Triacs. Gate Turn-Off Thyristors (GTOs). Bipolar Power or Junction Transistors (BPTs or BJTs). Power MOSFETs. Static Induction Transistors (SITs). Insulated Gate Bipolar Transistors (IGBTs). MOS-Controlled Thyristors (MCTs). Integrated Gate-Commutated Thyristors (IGCTs). Large Band-Gap Materials for Devices. Power Integrated Circuits (PICs). 2. AC Machines for Drives. Induction Machines. Synchronous Machines. Variable Reluctance Machine (VRM). 3. Diodes and Phase-Controlled Converters. Diode Rectifiers. Thyristor Converters. Converter Control. EMI and Line Power Quality Problems. 4. Cycloconverters. Phase-Controlled Cycloconverters. Matrix Converters. High-Frequency Cycloconverters. 5. Voltage-Fed Converters. Single-Phase Inverters. Three-Phase Bridge Inverters. Multi-Stepped Inverters. Pulse Width Modulation Techniques. Three-Level Inverters. Hard Switching Effects. Resonant Inverters. Soft-Switched Inverters. Dynamic and Regenerative Drive Braking. PWM Rectifiers. Static VAR Compensators and Active Harmonic Filters. Introduction to Simulation-MATLAB/SIMULINK. 6. Current-Fed Converters. General Operation of a Six-Step Thyristor Inverter. Load-Commutated Inverters. Force-Commutated Inverters. Harmonic Heating and Torque Pulsation. Multi-Stepped Inverters. Inverters with Self-Commutated Devices. Current-Fed vs Voltage-Fed Converters. 7. Induction Motor Slip-Power Recovery Drives. Doubly-Fed Machine Speed Control by Rotor Rheostat. Static Kramer Drive. Static Scherbius Drive. 8. Control and Estimation of Induction Motor Drives. Induction Motor Control with Small Signal Model. Scalar Control. Vector or Field-Oriented Control. Sensorless Vector Control. Direct Torque and Flux Control (DTC). Adaptive Control. Self-Commissioning of Drive. 9.
Control and Estimation of Synchronous Motor Drives. Sinusoidal SPM Machine Drives. Synchronous Reluctance Machine Drives. Sinusoidal IPM Machine Drives. Trapezoidal SPM Machine Drives. Wound-Field Synchronous Machine Drives. Sensorless Control. Switched Reluctance Motor (SRM) Drives. 10. Expert System Principles and Applications. Expert System Principles. Expert System Shell. Design Methodology. Applications. Glossary. 11. Fuzzy Logic Principles and Applications. Fuzzy Sets. Fuzzy System. Fuzzy Control. General Design Methodology. Applications. Fuzzy Logic Toolbox. Glossary. 12. Neural Network Principles and Applications. The Structure of a Neuron. Artificial Neural Network. Other Networks. Neural Network in Identification and Control. General Design Methodology. Applications. Neuro-Fuzzy Systems. Demo Program with Neural Network Toolbox. Glossary. Index.

2,836 citations

Book
01 Jan 2005

1,808 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: In this article, the authors demonstrate that a deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.
Abstract: Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.
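The "off-policy training of deep Q-functions" this abstract refers to rests on a bootstrapped temporal-difference target computed over sampled transitions; a minimal NumPy sketch of that target computation follows (illustrative of the general off-policy Q-learning target, not of the paper's specific algorithm).

```python
import numpy as np

GAMMA = 0.99  # assumed discount factor

def q_targets(rewards, done, q_next_max):
    """Bootstrapped targets for a batch of transitions:
    y_i = r_i + gamma * max_a' Q(s'_i, a'), with the bootstrap term
    dropped for terminal transitions (done_i = True)."""
    rewards = np.asarray(rewards, dtype=float)
    done = np.asarray(done, dtype=bool)
    q_next_max = np.asarray(q_next_max, dtype=float)
    return rewards + GAMMA * np.where(done, 0.0, q_next_max)
```

Because the target depends only on stored transitions and the current Q estimate, transitions gathered by any policy (including those pooled asynchronously from multiple robots, as in the paper) can be replayed for training.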

1,142 citations