Showing papers on "Control variable published in 1969"


Journal ArticleDOI
TL;DR: In this article, a slack variable is used to transform an optimal control problem with scalar control and a scalar inequality constraint on the state variables into an unconstrained problem of higher dimension.
Abstract: A slack variable is used to transform an optimal control problem with a scalar control and a scalar inequality constraint on the state variables into an unconstrained problem of higher dimension. It is shown that, for a pth-order constraint, the pth time derivative of the slack variable becomes the new control variable. The usual Pontryagin principle or Lagrange multiplier rule gives necessary conditions of optimality. There are no discontinuities in the adjoint variables. A feature of the transformed problem is that any nominal control function produces a feasible trajectory. The optimal trajectory of the transformed problem exhibits singular arcs which correspond, in the original constrained problem, to arcs which lie along the constraint boundary.
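As a rough sketch of the transformation described in this abstract (the notation below is assumed for illustration, not taken from the paper): the state constraint is converted to an equality with a slack variable, and for a pth-order constraint the pth derivative of the slack becomes the new, unconstrained control.

```latex
% Slack-variable device for a scalar state constraint S(x) <= 0
% (notation assumed for illustration; not the paper's own symbols).
\begin{aligned}
  S\big(x(t)\big) + \tfrac{1}{2}\,\alpha(t)^{2} &= 0
      && \text{slack variable } \alpha \text{ replaces the inequality,}\\[2pt]
  \alpha^{(p)}(t) &= v(t)
      && \text{for a $p$th-order constraint, } v \text{ is the new control,}\\[2pt]
  \dot{x} &= f(x,u)
      && \text{original dynamics; } u \text{ is recovered by differentiating the equality $p$ times.}
\end{aligned}
```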

214 citations


Journal ArticleDOI
D.H. Chyung1
TL;DR: The problem of finding an optimal control in linear discrete systems with time delays in both the state variables and control variables is studied.
Abstract: The problem of finding an optimal control in linear discrete systems with time delays in both the state variables and control variables is studied.
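A generic model of the class studied here might be written as follows; the matrices, delay lengths, and quadratic cost are assumptions for illustration, not the paper's formulation.

```latex
% Linear discrete-time system with delays h and r in the state and control
% (all matrices and cost weights are assumed for illustration).
x_{k+1} = A_{0}\,x_{k} + A_{1}\,x_{k-h} + B_{0}\,u_{k} + B_{1}\,u_{k-r},
\qquad
J = x_{N}^{\top} S\, x_{N} + \sum_{k=0}^{N-1}\big( x_{k}^{\top} Q\, x_{k} + u_{k}^{\top} R\, u_{k} \big).
```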

36 citations


Journal ArticleDOI
TL;DR: As an alternative to the use of the Maximum Principle to calculate the optimal control, a simple extremum-seeking feedback control scheme is applied to optimize the process.
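A minimal sketch of a perturbation-based extremum-seeking loop of the kind the TL;DR refers to; the static performance map, dither parameters, and gains below are assumptions, not the paper's scheme.

```python
import math

def plant_performance(u, t):
    """Hypothetical static performance map with a slowly drifting optimum."""
    u_star = 2.0 + 0.5 * math.sin(0.01 * t)   # unknown, slowly drifting optimum
    return -(u - u_star) ** 2                  # performance index to be maximised

def extremum_seeking(steps=2000, dt=0.1, a=0.2, w=1.0, k=0.5):
    """Perturbation-based extremum seeking: superimpose a small dither a*sin(w*t)
    on the current estimate, correlate the measured performance with the dither
    to estimate the local gradient, and integrate that estimate."""
    u_hat = 0.0                                # current estimate of the optimising input
    for i in range(steps):
        t = i * dt
        dither = a * math.sin(w * t)
        y = plant_performance(u_hat + dither, t)
        grad_estimate = y * math.sin(w * t)    # demodulation: average ~ -a*(u_hat - u*)
        u_hat += k * grad_estimate * dt        # gradient ascent on the estimate
    return u_hat

if __name__ == "__main__":
    print("control value after adaptation:", round(extremum_seeking(), 3))
```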

25 citations


Journal ArticleDOI
TL;DR: In this article, the problem of determining the optimal use of available renewal capability for first-order time-varying linear systems with a quadratic performance index, using an exponential model of component failure, is investigated.
Abstract: The design of systems with a component subject to failure may include a discrete renewal capability to counter the degradation in performance caused by improper operation of the failed component. A solution is presented to the problem of determining the optimal use of available renewal capability for first-order time-varying linear systems with a quadratic performance index, using an exponential model of component failure. The solution is extended quasi-optimally to linear systems of arbitrary order. The renewal policy determined is performance-adaptive in the sense that it depends on the failure and renewal histories of the system and on their relation to future operating requirements. The renewal policy is determined with respect to other control variables to insure overall optimal performance. A discrete stage variable is introduced to specify the system operating condition. The optimal control and renewal policy within each stage is specified by a pair of Riccati equations whose solutions are precomputed. The method avoids quantization of continuous-valued state variables, thereby lessening the effects of the curse of dimensionality.
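The abstract states that the optimal control and renewal policy within each stage are specified by a pair of precomputed Riccati equations. A generic discrete-time Riccati recursion of that general kind (system and weighting matrices assumed, not the paper's) is:

```latex
% Generic discrete-time Riccati recursion used to precompute an LQ policy
% within a stage (all matrices are assumed for illustration).
P_N = S, \qquad
P_k = Q + A_k^{\top} P_{k+1} A_k
      - A_k^{\top} P_{k+1} B_k \left( R + B_k^{\top} P_{k+1} B_k \right)^{-1}
        B_k^{\top} P_{k+1} A_k,
\qquad
u_k^{*} = -\left( R + B_k^{\top} P_{k+1} B_k \right)^{-1} B_k^{\top} P_{k+1} A_k\, x_k .
```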

24 citations


Patent
Robert E Zumwalt1
24 Feb 1969
TL;DR: In this article, a general purpose digital computer is used to control a fluidized catalytic cracking unit for carbon balance and maximization of a secondary control function by using a general-purpose digital computer as the controlling means for variables such as the settings on the regenerator flue gas control valve and the regenerated catalyst circulation control valve.
Abstract: A fluidized catalytic cracking unit is controlled for carbon balance and maximization of a secondary control function by using a general purpose digital computer, which is responsive to various temperatures and pressures, as the controlling means for variables such as the settings on the regenerator flue gas control valve and the regenerated catalyst circulation control valve. The process of the present invention involves obtaining a plurality of signals representing control variables, comparing these with the desired values for the control variables, and carrying out in a general purpose computer the calculation of a carbon adjustment factor according to the algorithm given in the specification, where each of the variables has the definition set forth there. When Delta SVn is positive, indicating a correction for carbon-burning conditions, the desired control action is obtained by a logic sequence which determines whether either of the controlled variables is under constraint and, if not, takes the control step most consistent with optimization of the secondary variable. If one or both of the controlled variables are under constraint, the logic sequence indicates the correct control step to be taken or, if no control step can be taken, indicates that none can be taken. When Delta SVn is negative, indicating a correction for carbon-building conditions, the logic sequence likewise allows the choice of the optimum control step to be taken or, if none can be taken, indicates this fact. After the logic sequence based upon the algorithm has been completed, a signal is obtained to move the correct control valve (that is, make the required adjustment in the controlled variable), the signal being corrected to reflect the difference in control function (depending upon which valve is to be moved) and for the position of the valve immediately prior to movement to the new position. By carrying out the control function of the present invention, carbon balance can be well controlled in a catalytic cracking unit while the secondary control variable (such as regenerator air velocity) can be controlled or maximized.
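The constraint-checking logic sequence described in the abstract could be sketched roughly as below; every name, argument, and the tie-breaking rule are hypothetical, and the carbon adjustment factor itself is computed per the specification, which is not reproduced here.

```python
def choose_control_step(delta_sv, flue_gas_at_limit, catalyst_circ_at_limit,
                        secondary_effect):
    """Hypothetical sketch of the logic sequence described in the abstract.

    delta_sv          -- carbon adjustment factor (its formula is in the patent
                         specification and is not reproduced here)
    *_at_limit        -- True when that controlled valve is against a constraint
    secondary_effect  -- dict: valve name -> assumed effect of moving that valve
                         on the secondary control function (illustrative only)

    Returns (valve, correction) or None when no control step can be taken."""
    correction = "carbon_burning" if delta_sv > 0 else "carbon_building"

    candidates = [valve for valve, at_limit in
                  (("flue_gas_valve", flue_gas_at_limit),
                   ("catalyst_circulation_valve", catalyst_circ_at_limit))
                  if not at_limit]
    if not candidates:
        return None   # both controlled variables under constraint: indicate no step

    # Take the step most consistent with optimisation of the secondary variable.
    best_valve = max(candidates, key=lambda v: secondary_effect.get(v, 0.0))
    return best_valve, correction

# Illustrative call (all values hypothetical):
print(choose_control_step(delta_sv=0.8, flue_gas_at_limit=False,
                          catalyst_circ_at_limit=True,
                          secondary_effect={"flue_gas_valve": 0.3,
                                            "catalyst_circulation_valve": 0.1}))
```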

11 citations


Journal ArticleDOI
TL;DR: Closed loop control means a system where the output has an effect on the input in order to obtain the desired result, and adaptive systems take changes in the environment into account, especially stochastic variations.
Abstract: We introduce a few definitions. We have a system, in our examples simple linear difference or differential equations connecting certain inputs and outputs. There is a control variable which should enable us to produce some desired output. The output is the controlled variable. Open loop control exists where the output has no effect upon the input. Feedback provides the means of feeding back the output in order to enable a comparison with the desired output. Closed loop control means a system where output has effect on the input in order to obtain the desired result. Adaptive systems take changes in the environment into account, especially stochastic variations.
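The definitions can be illustrated with a first-order linear difference equation, run once open loop and once with feedback; all numbers below are assumptions chosen for illustration.

```python
def simulate(steps=50, a=0.9, b=1.0, target=5.0, disturbance=0.5, gain=0.6):
    """Illustrative first-order plant x[k+1] = a*x[k] + b*u[k] + d, run once open
    loop (u fixed in advance) and once closed loop (u corrected by feedback)."""
    # Open loop: choose the u that would hit the target if there were no disturbance.
    u_open = target * (1 - a) / b
    x_open = x_closed = 0.0
    for _ in range(steps):
        x_open = a * x_open + b * u_open + disturbance
        # Closed loop: feed the measured output back and correct toward the target.
        u_closed = u_open + gain * (target - x_closed)
        x_closed = a * x_closed + b * u_closed + disturbance
    return x_open, x_closed

x_ol, x_cl = simulate()
print(f"open loop settles near {x_ol:.2f}, closed loop near {x_cl:.2f} (target 5.0)")
```

The unmeasured disturbance pushes the open-loop response well off the target, while feedback pulls the closed-loop response much closer, which is the distinction the abstract is drawing.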

3 citations


01 Feb 1969
TL;DR: In this paper, the authors apply optimal control theory to a general mathematical model consisting of equations defining the interdependence of sets of variables characterizing the educational system, and apply this approach to reveal the values of a systems approach to social
Abstract: Identifiers: DYNAMOD Model, Mathematical Model. Model building for educational planning in this country has been heavily influenced by the USOE DYNAMOD Model, a computerized Markov-type or input-output model. However, the input-output method is structurally inadequate to reflect the true behavior of the educational system. To introduce some elements of decision making into the model, some investigators have attempted to apply optimal control theory. Application of optimal control theory involves the addition of control variables, which are constrained in their values and thus reflect political or policy limits, to a general mathematical model consisting of equations defining the interdependence of sets of variables characterizing the educational system. Control theory models are theoretically attractive planning devices because they allow for the specification of a system's initial states and certain desired targets while providing for the selection of a policy which achieves these targets at a minimum cost while satisfying existing constraints. Although barriers to practical implementation exist, this approach promises to aid in revealing the values of a systems approach to social
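The kind of constrained planning problem described here might be stated generically as follows; all symbols are assumed for illustration (the state could collect enrolments and staff levels, the controls the policy instruments).

```latex
% Generic constrained control formulation of the planning problem (symbols assumed).
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} c(x_k, u_k)
\quad \text{subject to} \quad
x_{k+1} = A x_k + B u_k, \qquad
u_{\min} \le u_k \le u_{\max}, \qquad
x_0 \text{ given}, \; x_N = x_{\text{target}} .
```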

2 citations


Journal ArticleDOI
TL;DR: In this article, a quasi-optimal system for time-sharing control is designed via the direct method of Lyapunov, which specifies control laws and the decision criterion for utilization of the controller.
Abstract: This paper is concerned with the problem of sampled-data control of a multi-variable system when only one control variable may be used at a time. A quasi-optimal system for time-sharing control is designed via the direct method of Lyapunov. The design specifies control laws and the decision criterion for utilization of the controller. It is shown that even with simplified decision criteria, stability properties are preserved. The method is developed for continuous plants containing at most a single integration. The results also apply to the classical adaptive-sampling problem where the objective is simply to conserve the number of controlling sampling intervals for a single process. A numerical example is presented of the time-shared control of four second-order systems, each containing an integration.
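A rough sketch of the time-sharing idea (not the paper's specific Lyapunov design): at each sampling instant only one control channel acts, chosen as the one that gives the largest decrease of a quadratic Lyapunov function. The plant, the matrix P, and the candidate control values are assumptions.

```python
import numpy as np

def time_shared_step(A, B, P, x, u_candidates):
    """One sampling interval of a time-shared controller: only one control channel
    may act at a time, so pick the (channel, value) pair giving the largest
    decrease of the Lyapunov function V(x) = x' P x."""
    V_now = x @ P @ x
    best = (None, 0.0, A @ x)            # fallback: no control applied
    best_drop = 0.0
    for j in range(B.shape[1]):          # one channel at a time
        for u in u_candidates:
            x_next = A @ x + B[:, j] * u
            drop = V_now - x_next @ P @ x_next
            if drop > best_drop:
                best_drop, best = drop, (j, u, x_next)
    return best

# Illustrative use: two decoupled first-order plants sharing one actuator slot.
A = np.diag([0.95, 0.90])
B = np.eye(2)
P = np.eye(2)
x = np.array([2.0, -3.0])
for _ in range(20):
    channel, value, x = time_shared_step(A, B, P, x, u_candidates=(-0.5, 0.0, 0.5))
print("state after 20 samples:", np.round(x, 3))
```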

1 citation


01 Jan 1969
TL;DR: The linear quadratic control problem under the restriction that the control variable is constant over the sampling intervals is considered, and algorithms and a flow chart for numerical solution are presented.
Abstract: In this report we consider the linear quadratic control problem under the restriction that the control variable is constant over the sampling intervals. Algorithms and a flow chart for numerical solution are presented. The program can be used to design optimal control systems and to compute optimal filters and predictors for implementation on process control computers.
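A brief sketch of the underlying computation, assuming a zero-order hold and a steady-state solution; the double-integrator plant, weights, and sampling interval are illustrative, and the report's own algorithms and weighting of the sampled cost may differ.

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_are

# Hold the control constant over each sampling interval (zero-order hold),
# discretise the plant, and solve the resulting discrete-time LQ problem.
A = np.array([[0.0, 1.0], [0.0, 0.0]])     # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                               # assumed state weight
R = np.array([[0.1]])                       # assumed control weight
h = 0.5                                     # sampling interval

Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), h)
P = solve_discrete_are(Ad, Bd, Q, R)                    # steady-state Riccati solution
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)   # optimal piecewise-constant gain
print("feedback gain K =", np.round(K, 3))
```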

1 citation


Journal ArticleDOI
TL;DR: In this paper, a two-level technique for optimal system design is proposed, in which trajectory sensitivity cost is included with state and control variable terms, and the resulting optimal system would be less sensitive to plant-parameter variations than one designed without such a trajectory-sensitivity cost term.
Abstract: A two-level technique for optimal system design is proposed, in which trajectory sensitivity cost is included with state and control variable terms. The resulting optimal system would be less sensitive to plant-parameter variations than one designed without such a trajectory-sensitivity cost term.
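A performance index augmented with a trajectory-sensitivity term might take the following generic form; the weights and the sensitivity definition are assumed for illustration, with p denoting the plant parameter vector.

```latex
% Performance index with state, control, and trajectory-sensitivity terms
% (weights Q, R, S and the sensitivity definition are assumed).
\sigma(t) = \left.\frac{\partial x(t)}{\partial p}\right|_{p = p_0},
\qquad
J = \int_0^{T} \left( x^{\top} Q\, x + u^{\top} R\, u + \sigma^{\top} S\, \sigma \right) dt .
```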

Journal ArticleDOI
TL;DR: In this paper, the optimization of discrete systems constrained in an unusual way is discussed and a simple functional equation suitable for carrying on the solution of the problem is derived via dynamic programming.
Abstract: The optimization of discrete systems constrained in an unusual way is discussed. Moreover, it is supposed that for such systems, here called "nonsemper," the selection of the control policy consists in the choice of the value of the control variables as well as in the allocation over time of the control action itself. A simple functional equation suitable for carrying on the solution of the problem is derived via dynamic programming.
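A functional equation of the general kind described, in which the decision at each stage covers both whether to spend one of a limited number of control actions and, if so, its value, might look like this (notation assumed; m counts the control actions still available).

```latex
% Dynamic-programming functional equation with allocation of the control action
% (notation assumed for illustration).
V_k(x, m) = \min\Big\{
   \underbrace{g(x, 0) + V_{k+1}\big(f(x, 0),\, m\big)}_{\text{no control applied}},\;
   \underbrace{\min_{u \in U} \big[ g(x, u) + V_{k+1}\big(f(x, u),\, m-1\big) \big]}_{\text{one control action spent}}
\Big\}, \qquad m \ge 1 .
```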

Journal ArticleDOI
TL;DR: In this article, the authors consider a process having a quadratic performance criterion, the measurement of which is affected by known process time lags and by noise whose distribution is assumed known, and in which the position of the optimum is affected by disturbances.
Abstract: The paper considers a process having a quadratic performance criterion, the measurement of which is affected by known process time lags and by noise, the distribution of which is assumed known, and in which the position of the optimum is affected by disturbances. The performance criterion is deemed controllable by a single control variable. A possible control strategy is to adjust the control variable by equal amounts at equal intervals of time, the sign of the change depending on whether the previous change led to an increase or decrease in the measured performance. This paper examines the effects of noise and lags, but not explicitly disturbances, on the control achieved by this strategy. Criteria are proposed for measuring performance of the control system under both steady-state and transient conditions. Two equations relating these two measures to the step sizes and time interval are established, and two methods of solving them to determine the measures for any step size and time interval are discuss...
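The fixed-step strategy examined here can be sketched as follows; the performance map, noise level, lag, and step size are assumptions for illustration, not the paper's parameters.

```python
import random

def noisy_performance(u, history, true_optimum=3.0, noise=0.2, lag=3):
    """Hypothetical measured performance: quadratic in u, observed through a
    transport lag of `lag` sampling intervals and corrupted by Gaussian noise."""
    history.append(-(u - true_optimum) ** 2)
    delayed = history[-lag - 1] if len(history) > lag else history[0]
    return delayed + random.gauss(0.0, noise)

def fixed_step_strategy(steps=200, step_size=0.25, u0=0.0):
    """Adjust the control variable by a fixed amount each interval; keep the sign
    of the change if the measured performance improved, otherwise reverse it."""
    history = []
    u, direction = u0, +1
    last = noisy_performance(u, history)
    for _ in range(steps):
        u += direction * step_size
        current = noisy_performance(u, history)
        if current < last:          # performance got worse: reverse direction
            direction = -direction
        last = current
    return u

if __name__ == "__main__":
    random.seed(0)
    print("final control value:", round(fixed_step_strategy(), 2))
```

With noise and a measurement lag, the strategy hunts around the optimum rather than settling on it, which is the behaviour whose steady-state and transient measures the paper analyses.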