Journal ArticleDOI

The Effects of Weighting Matrices on the SDRE-Based Control Method for a 3-DOF Helicopter

01 Sep 2021 - Vol. 9, Iss. 3, pp. 588-605
About: The article was published on 2021-09-01 and is currently open access. It has received no citations to date.


References
Journal ArticleDOI
TL;DR: State-Dependent Riccati Equation (SDRE) strategies have emerged as general design methods that provide a systematic and effective means of designing nonlinear controllers, observers, and filters.
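The SDRE strategy summarized above can be sketched in a few lines: factor the nonlinear dynamics into a state-dependent coefficient (SDC) form ẋ = A(x)x + B(x)u, then solve an algebraic Riccati equation pointwise in x. A minimal illustration for a hypothetical pendulum model using SciPy; the plant, weights, and parameter values here are assumptions for illustration, not the helicopter model of the article:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, Q, R, g=9.81, l=1.0):
    """One SDRE step for a pendulum with dynamics xdot = [x2, (g/l) sin(x1) + u].

    SDC factorization (one of many possible):
        A(x) = [[0, 1], [(g/l) * sin(x1)/x1, 0]],  B = [[0], [1]].
    """
    x1 = x[0]
    # sin(x1)/x1 -> 1 as x1 -> 0; np.sinc(t) = sin(pi t)/(pi t) handles the limit
    s = np.sinc(x1 / np.pi)
    A = np.array([[0.0, 1.0], [(g / l) * s, 0.0]])
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation, solved pointwise
    K = np.linalg.solve(R, B.T @ P)        # feedback u = -K(x) x
    return K

Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])
K = sdre_gain(np.array([0.5, 0.0]), Q, R)
```

Because A(x) is re-evaluated and the Riccati equation re-solved at every state, the feedback u = -K(x)x continuously adapts the locally optimal LQR gain along the trajectory.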

462 citations

Journal ArticleDOI
TL;DR: Infinite-time horizon nonlinear optimal control (ITHNOC) presents a viable option for synthesizing stabilizing controllers for nonlinear systems by making a state-input tradeoff, where the objective is to minimize the cost given by a performance index.
Abstract: Aerospace engineering applications greatly stimulated the development of optimal control theory during the 1950s and 1960s, where the objective was to drive the system states in such a way that some defined cost was minimized. This turned out to have very useful applications in the design of regulators (where some steady state is to be maintained) and in tracking control strategies (where some predetermined state trajectory is to be followed). Among such applications was the problem of optimal flight trajectories for aircraft and space vehicles. Linear optimal control theory in particular has been very well documented and widely applied, where the plant that is controlled is assumed linear and the feedback controller is constrained to be linear with respect to its input. However, the availability of powerful low-cost microprocessors has spurred great advances in the theory and applications of nonlinear control. The competitive era of rapid technological change, particularly in aerospace exploration, now demands stringent accuracy and cost requirements in nonlinear control systems. This has motivated the rapid development of nonlinear control theory for application to challenging, complex, dynamical real-world problems, particularly those that bear major practical significance in aerospace, marine, and defense industries. Infinite-time horizon nonlinear optimal control (ITHNOC) presents a viable option for synthesizing stabilizing controllers for nonlinear systems by making a state-input tradeoff, where the objective is to minimize the cost given by a performance index. The original theory of nonlinear optimal control dates from the 1960s. Various theoretical and practical aspects of the problem have been addressed in the literature over the decades since.
In particular, the continuous-time nonlinear deterministic optimal control problem associated with autonomous (time-invariant) nonlinear regulator systems that are affine (linear) in the controls has been studied by many authors. The long-established theory of optimal control offers quite mature and well-documented techniques for solving this control-affine nonlinear optimization problem, based on dynamic programming or calculus of variations, but their application is generally a very tedious task. Bellman's dynamic programming approach reduces the problem to solving a nonlinear first-order partial differential equation (PDE), the Hamilton–Jacobi–Bellman (HJB) equation. The solution to the HJB equation gives the optimal performance/cost value (or storage) function and determines an optimal control in feedback form under some smoothness assumptions. Alternatively, in the classical calculus of variations, optimal control problems can be characterized locally in terms of the Hamiltonian dynamics arising from Pontryagin's minimum principle. These are the characteristic equations of the HJB PDE, which result in a nonlinear, constrained two-point boundary value problem (TPBVP) that, in general, can only be solved by successive approximation of the optimal control input using iterative numerical techniques for each set of initial conditions. Numerically, even though the nonlinear TPBVP is somewhat easier to solve than the HJB PDE, control signals can only be determined offline and are thus best suited for feedforward control of plants for which the state trajectories are known a priori. Therefore, contrary to the dynamic programming approach, the resultant control law is not generally in feedback form. Open-loop control, however, is sensitive to random disturbances and requires that the initial state be on the optimal trajectory.
In contrast, nonlinear optimal feedback has inherent robustness properties (inherent in the sense that it is obtained by ignoring uncertainty and disturbances). The potential difficulty with the HJB approach is that no efficient algorithm is available to solve the PDE when it is nonlinear and the problem dimension is high, making it impossible to derive exact expressions for optimal controls for most nontrivial problems of interest. The optimal control can only be computed in special cases, such as linear dynamics and quadratic cost, or very low-dimensional systems. In particular, if the plant is linear time invariant (LTI) and the (infinite-time) performance index is quadratic, then the corresponding HJB equation for this famous linear-quadratic regulator (LQR) problem reduces to an algebraic Riccati equation (ARE). Contrary to the well-developed and widely applied theory and computational tools for the Riccati equation (for example, see [1]), the HJB equation is difficult, if not impossible, to solve for most practical applications. The exact solution for the optimal control policies is very complex.
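For the LTI/quadratic special case mentioned above, the HJB equation collapses to the algebraic Riccati equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0, which standard solvers handle directly. A minimal sketch for a double-integrator plant, chosen here purely for illustration; with Q = I and R = 1 its optimal gain is known in closed form to be K = [1, √3]:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: xdot = A x + B u, cost J = ∫ (xᵀQx + uᵀRu) dt
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)  # solves AᵀP + PA - PBR⁻¹BᵀP + Q = 0
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback u = -K x

# the ARE residual should vanish up to numerical precision
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

For this plant the solver returns K = [1, √3], matching the hand-derived solution, which is exactly the kind of closed-form check that is unavailable for general nonlinear HJB problems.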

293 citations

Journal ArticleDOI
TL;DR: The capabilities and design flexibility of SDRE control are emphasized, addressing the systematic selection of the design matrices and detailing how to carry out an effective SDRE design for systems that both do and do not conform to the basic structure and conditions required by the method.
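One way to see the effect of the design (weighting) matrices, which is the topic of the article above: scaling Q up relative to R penalizes state error more heavily and yields larger feedback gains, i.e. faster but more control-hungry closed-loop dynamics. A toy double-integrator stand-in, assumed here for illustration rather than the article's helicopter model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double-integrator stand-in plant
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def gain(q_scale):
    """LQR gain for Q = q_scale * I with fixed R, to expose the Q/R tradeoff."""
    Q = q_scale * np.eye(2)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_small = gain(1.0)    # light state weighting
K_large = gain(100.0)  # heavy state weighting -> larger gains, faster poles
```

Sweeping `q_scale` like this is a common way to tune the state-versus-effort tradeoff before committing to a weighting choice.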

288 citations

Proceedings ArticleDOI
28 Jun 2000
TL;DR: In this article, the state-dependent Riccati equation method of nonlinear regulation is used to control the position and attitude of a spacecraft in the proximity of a tumbling target.
Abstract: Spacecraft which are required to remove space debris or collect disabled satellites must be able to achieve the attitude of the target while being positioned at a desired distance from the target. The six degree of freedom motion of a spacecraft performing rotational and translational maneuvers has nonlinear equations of motion. The state-dependent Riccati equation method of nonlinear regulation is used to control the position and attitude of a spacecraft in the proximity of a tumbling target. A six degree of freedom simulation of the spacecraft and target are utilized to demonstrate the effectiveness of the controller.

169 citations

Journal ArticleDOI
TL;DR: In this paper, the state-dependent Riccati equation method is applied to a missile steered by moving masses, with the mass positions along the body pitch and yaw axes commanded as the controls.
Abstract (nomenclature):
A(x), B(x) = state-dependent system matrices, of sizes n × n and n × m
Q(x), R(x) = state-dependent weighting matrices, of sizes n × n and m × m
r, ṙ = range and range rate of the target with respect to the missile
S = solution to the Riccati equation
T = rocket motor thrust per unit mass, acting along the longitudinal axis of the missile
u = control vector of size m × 1
u_pert = control perturbation vector of size m × 1
x = state vector of size n × 1
x_pert = state perturbation vector of size n × 1
X, Y, Z = relative position components of the target with respect to the missile
y, z = positions of the moving masses along the pitch and yaw axes with respect to the body
y_c, z_c = moving-mass position commands
θ, ψ = pitch and yaw Euler angles of the missile
(·)_y, (·)_z = line-of-sight angles (symbols lost in extraction)

112 citations