Journal ArticleDOI

A Novel Nonlinear Deep Reinforcement Learning Controller for DC–DC Power Buck Converters

TL;DR: An intelligent proportional-integral (iPI) controller based on a sliding-mode (SM) observer is proposed to mitigate the destructive impedance instabilities of nonideal, time-varying constant power loads (CPLs) in the ultralocal-model sense.
Abstract
Nonlinearities and unmodeled dynamics inevitably degrade the quality and reliability of power conversion and, as a result, pose major challenges to high-performance voltage stabilization of dc–dc buck converters. The stability of such power electronic equipment is further threatened when feeding nonideal constant power loads (CPLs) because of their induced negative-impedance characteristics. In response to these challenges, advanced regulatory and technological mechanisms for the converters need to be developed to implement these interface systems efficiently in the microgrid configuration. This article proposes an intelligent proportional-integral (iPI) controller based on a sliding-mode (SM) observer to mitigate the destructive impedance instabilities of nonideal, time-varying CPLs in the ultralocal-model sense. In particular, an auxiliary deep deterministic policy gradient (DDPG) controller is adaptively developed to decrease the observer estimation error and further improve the dynamic characteristics of dc–dc buck converters. The DDPG design comprises two parts: (i) an actor network, which generates the policy commands, and (ii) a critic network, which evaluates the quality of the policy commands generated by the actor. The suggested strategy establishes the DDPG-based control to compensate for what the iPI-based SM observer cannot. In this application, the weight coefficients of the actor and critic networks are trained on the reward feedback of the voltage error, using the gradient descent scheme. Finally, to investigate the merits and implementation feasibility of the suggested method, experimental results on a laboratory prototype of a dc–dc buck converter feeding a time-varying CPL are presented.


Citations
Journal ArticleDOI

Evaluation of different boosting ensemble machine learning models and novel deep learning and boosting framework for head-cut gully erosion susceptibility.

TL;DR: In this paper, the authors developed head-cut gully erosion prediction maps using boosting ensemble machine learning algorithms, namely Boosted Tree (BT), Boosted Generalized Linear Models (BGLM), Boosted Regression Tree (BRT), Extreme Gradient Boosting (XGB), and Deep Boost (DB).
Journal ArticleDOI

Artificial Intelligence-Aided Minimum Reactive Power Control for the DAB Converter Based on Harmonic Analysis Method

TL;DR: The trained DDPG agent, which acts like an implicit function, can provide optimal control strategies for the DAB converter in real time, with minimum reactive power and soft-switching performance over the continuous operation range.
Journal ArticleDOI

Model Predictive Control Based Type-3 Fuzzy Estimator for Voltage Stabilization of DC Power Converters

TL;DR: In this paper, a novel interval type-3 fuzzy neural network-based model predictive control (IT3FNNMPC) controller is proposed to mitigate the destructive effects of CPLs in dc–dc boost converters implemented in the microgrid and power distribution system.
Journal ArticleDOI

Voltage Regulation of DC-DC Buck Converters Feeding CPLs via Deep Reinforcement Learning

TL;DR: In this article, a model-free deep reinforcement learning (DRL) control strategy, based on a sub-goal reward/penalty mechanism, is proposed to enhance the bus-voltage regulation performance of dc–dc buck converters.
Journal ArticleDOI

Optimum Power Flow in DC Microgrid Employing Bayesian Regularized Deep Neural Network

TL;DR: In this paper, a Bayesian regularized deep neural network (BDNN) power flow control technique was proposed to reduce the Loss of Load Expectation (LOLE) from 2.1×10⁻⁴ to 7.8×10⁻⁸ without a change in the loss of supply expectation (LOSE) value.
References
Posted Content

Continuous control with deep reinforcement learning

TL;DR: This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
Journal ArticleDOI

Constant power loads and negative impedance instability in automotive systems: definition, modeling, stability, and control of power electronic converters and motor drives

TL;DR: Sliding-mode and feedback linearization techniques along with large-signal phase plane analysis are presented as methods to analyze, control, and stabilize automotive converters/systems operating with CPLs.
Journal ArticleDOI

Dynamic Behavior and Stabilization of DC Microgrids With Instantaneous Constant-Power Loads

TL;DR: In this article, stability issues in dc microgrids with instantaneous constant-power loads (CPLs) are explored and mitigation strategies such as load shedding, adding resistive loads, filters, or energy storage directly connected to the main bus, and control methods are investigated.
Journal ArticleDOI

New Sliding-Mode Observer for Position Sensorless Control of Permanent-Magnet Synchronous Motor

TL;DR: A novel sliding-mode observer is built according to the back electromotive force (EMF) model; after the equivalent back-EMF signal is obtained, the low-pass filter and phase-compensation module are eliminated and estimation accuracy is improved.
Journal ArticleDOI

Constant-Power Load System Stabilization by Passive Damping

TL;DR: In this paper, the authors investigate passive damping as a general method to stabilize power systems with CPLs, using a representative system model consisting of a voltage source, an LC filter, and an ideal CPL, and demonstrate that a CPL system can be stabilized by a simple damping circuit added to one of the filter elements.