
Showing papers on "Sliding mode control published in 1969"


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the possibility of lowering system sensitivity by the application of a sliding mode, which can be obtained by means of control functions that have first-order discontinuities on certain hypersurfaces in the state space.

832 citations


Journal ArticleDOI
D. Kleinman1
TL;DR: In this article, a convergent algorithm is developed for computing the optimal feedback gains for linear systems in which the intensity of the driving noise is proportional to the control input, and conditions are given under which an optimal control always exists.
Abstract: Optimal stochastic control is investigated for linear systems in which the intensity of the driving noise is proportional to control input. Conditions are given under which an optimal control always exists. It is shown that the optimal control is linear in the system state. A convergent algorithm is developed for computing the optimal feedback gains.
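The convergent gain-computation idea can be illustrated with the deterministic version of Kleinman's Newton-type iteration, in which each step solves a Lyapunov equation for the cost of the current stabilizing gain. This is a sketch under our own notation (the paper's setting, with noise intensity proportional to the control, modifies the Riccati-type equation, which is not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_gain_iteration(A, B, Q, R, K0, iters=25):
    """Newton-type iteration for a linear-quadratic feedback gain.

    Starting from a stabilizing gain K0, each step solves a Lyapunov
    equation for the cost matrix of the current gain, then updates.
    """
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Double integrator; K0 = [[1, 1]] is stabilizing.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = kleinman_gain_iteration(A, B, Q, R, np.array([[1.0, 1.0]]))
```

For this double-integrator example the exact answer is known in closed form (K = [1, √3]), so convergence of the iteration is easy to check.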

106 citations


Journal ArticleDOI
TL;DR: In this paper, a Taylor's series representation for the feedback gain matrix is used to obtain a suboptimal control law for minimizing a quadratic cost functional for nonlinear systems.
Abstract: A simple easily implemented method is developed for obtaining a suboptimal control law for the optimization problem associated with minimizing a quadratic cost functional for nonlinear systems. The suboptimal control law is derived using a Taylor's series representation for the feedback gain matrix after modeling the nonlinear system by a linear system at each instant of time. The resultant control law is of feedback form and is nonlinear in state. The suboptimal control is obtained without using iterative techniques or any true optimal solutions. A second-order numerical example illustrates the effectiveness of the method and gives a comparison to the results of previous methods.
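The "linear model at each instant of time" step can be sketched with a frozen-coefficient (state-dependent Riccati) controller for a pendulum; this is our own illustrative example, not the paper's Taylor-series gain expansion, and the plant, weights, and step size are assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Pendulum: th' = w, w' = -sin(th) + u, rewritten with a
# state-dependent "frozen" linear model A(x) at each instant.
def frozen_gain(th, Q, R):
    a21 = -np.sinc(th / np.pi)           # sin(th)/th, defined at th = 0
    A = np.array([[0.0, 1.0], [a21, 0.0]])
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)  # Riccati for the frozen model
    return np.linalg.solve(R, B.T @ P)    # instantaneous feedback gain

Q, R = np.eye(2), np.array([[1.0]])
x = np.array([1.0, 0.0])                 # 1 rad initial angle, at rest
dt = 0.01
for _ in range(2000):                    # 20 s of Euler integration
    K = frozen_gain(x[0], Q, R)
    u = (-K @ x).item()
    x = x + dt * np.array([x[1], -np.sin(x[0]) + u])
```

The resulting control is of feedback form and nonlinear in the state, as the abstract describes, since the gain is recomputed from the instantaneous linear model.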

48 citations


Journal ArticleDOI
Jane Cullum1
TL;DR: In this paper, it was shown that a continuous optimal control problem can be replaced by a sequence of finite-dimensional, discrete optimization problems in which the control and state variable constraints are treated directly.
Abstract: It is demonstrated that if P is a continuous optimal control problem whose system of differential equations is linear in the control and the state variables, and whose control and state variable constraint sets are convex, a direct method of determining an optimal solution of P exists. It is demonstrated that such a “continuous” problem can be replaced by a sequence of finite-dimensional, “discrete” optimization problems in which the control and state variable constraints are treated directly. The approximation obtained relates the respective optimal solutions.

47 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamic feedback control law is designed from a limited set of measurements so that the resulting closed-loop system is optimal for all initial states in the sense of minimizing a quadratic performance index.
Abstract: Optimal control laws usually require the complete measurement of the plant state. However, in practice one often has available only a small number of measurements. A procedure is developed that leads to a dynamic feedback control law which is a function of any given set of measurements. The resulting closed-loop system is optimal for all initial states of the system in the sense of minimizing a quadratic performance index. The order of the controller depends upon the observability properties of the plant. The development is extended to time-variable problems.

45 citations


Journal ArticleDOI
TL;DR: In this paper, an iterative procedure for time-optimal control of linear plants with constrained control amplitudes is presented, where the switching times are systematically adjusted until the control function closely approximates the known bang-bang form with n switchings.
Abstract: An iterative procedure for time-optimal control of linear plants with constrained control amplitudes is presented. It is assumed that the nth-order state-equation coefficient matrix has real eigenvalues, so that the time-optimal control is of the known bang-bang form with n switchings. The first step in the procedure is to arbitrarily choose n switching times (including the final time), and to calculate a piecewise-constant control function which, although not necessarily satisfying the amplitude constraints, does bring the plant to the desired terminal state. In the following steps the switching times are systematically adjusted until the control function closely approximates the bang-bang form. The procedure is simple to implement, and experiments have shown fast convergence.
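The first step of the procedure (constant control levels steering the plant to the target for guessed switching times) reduces to a small linear solve. A minimal sketch for a double integrator, with switching times chosen arbitrarily as the abstract prescribes; the function names and numbers are ours:

```python
import numpy as np

# Double integrator x'' = u. Given guessed switching times (t1, T),
# solve a 2x2 linear system for the two constant control levels that
# steer (x0, v0) to the origin at time T. These levels need not yet
# satisfy the amplitude constraint; the iteration would then adjust
# t1 and T until the levels approach the bang-bang values.
def steering_levels(x0, v0, t1, T):
    tau = T - t1
    # v(T) = v0 + u1*t1 + u2*tau = 0
    # x(T) = x0 + v0*T + u1*(t1**2/2 + t1*tau) + u2*tau**2/2 = 0
    M = np.array([[t1, tau],
                  [t1**2 / 2 + t1 * tau, tau**2 / 2]])
    return np.linalg.solve(M, [-v0, -x0 - v0 * T])

def propagate(x0, v0, u, dt):
    # closed-form motion under constant acceleration u
    return x0 + v0 * dt + u * dt**2 / 2, v0 + u * dt

x0, v0, t1, T = 1.0, 0.0, 1.5, 3.0
u1, u2 = steering_levels(x0, v0, t1, T)
x1, v1 = propagate(x0, v0, u1, t1)      # first segment
xf, vf = propagate(x1, v1, u2, T - t1)  # second segment
```

Propagating the two constant segments in closed form confirms the plant reaches the origin exactly, which is what makes the subsequent switching-time adjustment well-posed.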

20 citations


Journal ArticleDOI
TL;DR: This paper considers continuous-time systems with linear system equations and a quadratic performance criterion and shows that the computation of the optimal control for partially controlled systems can be split into two parts: a Riccati equation for the controlled subsystem and a linear equation for the term accounting for the uncontrolled subsystem.
Abstract: In many optimal control problems the performance criterion depends in part on the behavior of a system that is not subject to control. Since the control affects only a subset of the system state variables, such systems are said to be partially controlled. This paper considers continuous-time systems with linear system equations and quadratic performance criterion. The major result of this correspondence is that the computation of the optimal control for partially controlled systems can be split into the following parts: first, the optimal control for the controlled system is computed using a Riccati equation; next, a linear equation is solved to obtain a term for the control that accounts for the behavior of the uncontrolled system. Hence, the problem of computing the optimal control for an (n1 + n2)-dimensional system, where n1 is the dimension of the controlled system and n2 is the dimension of the uncontrolled system, is essentially reduced to computing the optimal control for the n1-dimensional controlled system. This results in a significant reduction in the computational requirements.
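The Riccati-plus-linear-equation split can be sketched concretely: for a controlled subsystem driven by an exponentially stable uncontrolled state, the extra gain term satisfies a Sylvester (linear) equation. The matrices, notation, and cross-check below are our own assumptions, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_sylvester

# Partially controlled linear-quadratic problem (our notation):
#   x' = A1 x + A12 w + B u   (controlled, dimension n1 = 2)
#   w' = A2 w                 (uncontrolled, dimension n2 = 1)
# cost = integral of x'Qx + u'Ru over [0, inf).
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A12 = np.array([[0.0], [1.0]])
A2 = np.array([[-1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A1, B, Q, R)        # n1 x n1 Riccati equation
Acl = A1 - B @ np.linalg.solve(R, B.T @ P)   # closed-loop matrix
S = solve_sylvester(Acl.T, A2, -P @ A12)     # the linear equation
# optimal control: u = -R^{-1} B^T (P x + S w)

# Cross-check against the full (n1 + n2)-dimensional Riccati equation.
Af = np.block([[A1, A12], [np.zeros((1, 2)), A2]])
Bf = np.vstack([B, np.zeros((1, 1))])
Qf = np.block([[Q, np.zeros((2, 1))], [np.zeros((1, 2)), np.zeros((1, 1))]])
Pf = solve_continuous_are(Af, Bf, Qf, R)
```

The blocks of the full solution reproduce P and S, illustrating the claimed reduction: only the n1-dimensional Riccati equation is nonlinear, the rest is linear algebra.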

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the mode interaction between rigid-body and elastic motion of a large launch vehicle and demonstrate how to synthesize a control system based on state variable formulations of modern control theory.
Abstract: The phenomenon of mode interaction between rigid-body and elastic motion of a large launch vehicle is described and illustrated. For severe interaction, control system synthesis by the usual approach of assuming pitch attitude and rate feedback and use of conventional servo analysis techniques is not suitable. By use of a new method based on the state variable formulations of modern control theory, a simple practical control system is synthesized without the trial and error associated with servo analysis methods. Specified closed-loop frequencies and damping ratios, including active control of structural mode dynamics, are achieved using only the booster thrust vector control. No frequency tracking filters or feedback signal discrimination is needed. The only feedback signals necessary are those provided by angular displacement and acceleration sensors. Five simple first-order passive filters are the only compensation network dynamic elements required.

7 citations


Journal ArticleDOI
TL;DR: In this paper, a necessary condition for the optimal control of a class of integral equation constraint systems is derived by use of a variational method with finite perturbation of the control variable.

4 citations


Journal ArticleDOI
TL;DR: In this article, a quasi-optimal system for time-sharing control is designed via the direct method of Lyapunov; the design specifies the control laws and the decision criterion for utilization of the controller.
Abstract: This paper is concerned with the problem of sampled-data control of a multi-variable system when only one control variable may be used at a time. A quasi-optimal system for time-sharing control is designed via the direct method of Lyapunov. The design specifies control laws and the decision criterion for utilization of the controller. It is shown that even with simplified decision criteria, stability properties are preserved. The method is developed for continuous plants containing at most a single integration. The results also apply to the classical adaptive-sampling problem where the objective is simply to conserve the number of controlling sampling intervals for a single process. A numerical example is presented of the time-shared control of four second-order systems, each containing an integration.
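A toy version of the time-sharing idea: several plants, one control channel, and a Lyapunov-style decision criterion choosing which plant to serve each sampling interval. This is a simplified sketch in the spirit of the abstract (integrator plants, V_i = x_i², and the gain are our assumptions, not the paper's example):

```python
import numpy as np

# Time-shared sampled control of four integrator plants: at each
# sampling instant only one plant receives the control input. The
# decision criterion picks the plant with the largest Lyapunov value
# V_i = x_i^2 and drives it toward zero; the others coast (x_i' = 0).
x = np.array([4.0, -3.0, 2.0, -1.0])   # four integrator states
T, k = 0.5, 1.0                        # sampling period, feedback gain
history = [np.abs(x).max()]
for _ in range(40):
    i = int(np.argmax(x**2))           # decision criterion: max V_i
    u = -k * x[i]                      # control law for the chosen plant
    x[i] += T * u                      # x_i' = u over one sampling period
    history.append(np.abs(x).max())
```

Even though only one plant is actuated at a time, the largest |x_i| never grows and all states are driven toward zero, mirroring the stability-preservation claim.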

1 citation


Journal ArticleDOI
TL;DR: In this article, a discrete model of a delta modulated control system is developed; using this model, all modes of oscillation at the sampling instants can be found, and necessary conditions for such modes are established.
Abstract: A discrete model of a delta modulated control system is developed. By using this model, all modes of oscillation of the delta modulated control system, at sampling instants, can be found. Necessary conditions for such modes of oscillation are established.
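The kind of oscillation mode such a discrete model predicts can be shown with a minimal loop: a one-bit (delta) quantizer closed around a discrete integrator. The plant, step size, and initial condition below are our own illustrative choices:

```python
import numpy as np

# One-bit (delta) modulated feedback loop around a discrete
# integrator plant: u_k = d * sign(-x_k), x_{k+1} = x_k + T * u_k.
# Observed at the sampling instants, the loop settles into a
# period-2 limit cycle, a mode of oscillation of the discrete model.
d, T = 1.0, 0.1
x = 0.05
xs = [x]
for _ in range(30):
    u = d * np.sign(-x)      # delta modulator: one-bit quantized error
    x = x + T * u            # discrete integrator plant
    xs.append(x)
```

Because the quantizer output can only be ±d, the state cannot come to rest at zero and instead alternates between ±0.05 at the sampling instants, the simplest example of the oscillation modes the model characterizes.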