
Showing papers on "Optimal control published in 1977"


Journal ArticleDOI
TL;DR: In this article, it is shown that if an existence theorem can be applied to guarantee that a solution exists, then by comparing all the candidates for optimality that the necessary conditions produce, one can in principle pick out an optimal solution to the problem.
Abstract: During the last ten years or so a large number of papers in professional journals in economics dealing with dynamic optimization problems have been employing the modern version of the calculus of variations called optimal control theory. The central result in the theory is the well known Pontryagin maximum principle providing necessary conditions for optimality in very general dynamic optimization problems. These conditions are not, in general, sufficient for optimality. Of course, if an existence theorem can be applied guaranteeing that a solution exists, then by comparing all the candidates for optimality that the necessary conditions produce, we can in principle pick out an optimal solution to the problem. In several cases, however, there is a more convenient method that can be used. Suppose that a solution candidate suggests itself through an application of the necessary conditions, or possibly also by a process of informed guessing. Then, if we can prove that the solution satisfies sufficiency conditions of the type considered in this paper, then these conditions will ensure the optimality of the solution. In such a case we need not go through the process of finding all the candidates for optimality, comparing them and finally appealing to an existence theorem. In order to get an idea of what types of conditions might be involved in such sufficiency theorems, it is natural to look at the corresponding problem in static optimization. Here it is well known that the first-order calculus or Kuhn-Tucker conditions are sufficient for optimality, provided suitable concavity/convexity conditions are imposed on the functions involved. It is natural to expect that similar conditions might secure sufficiency also in dynamic optimization problems. Growth theorists were early aware of this and proofs of sufficiency in particular problems were constructed; see, e.g., Uzawa's 1964 paper [19].
In the mathematical literature few and only rather special results were available until Mangasarian, in a 1966 paper [10], proved a rather general sufficiency theorem in which he was dealing with a nonlinear system, state and control variable constraints and a fixed time interval. In the maximization case, when there are no state space constraints, his result was, essentially, that the Pontryagin necessary conditions plus concavity of the Hamiltonian function with respect to the state and control variables, were sufficient for optimality. The Mangasarian concavity condition is rather strong and in many economic problems his theorem does not apply. Arrow [1] proposed an interesting partial

251 citations


Book
01 Jan 1977
TL;DR: This monograph is intended for use in a one-semester graduate course or advanced undergraduate course and contains the principles of general control theory and proofs of the maximum principle and basic existence theorems of optimal control theory.
Abstract: In the late 1950's, the group of Soviet mathematicians consisting of L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko made fundamental contributions to optimal control theory. Much of their work was collected in their monograph, The Mathematical Theory of Optimal Processes. Subsequently, Professor Gamkrelidze made further important contributions to the theory of necessary conditions for problems of optimal control and general optimization problems. In the present monograph, Professor Gamkrelidze presents his current view of the fundamentals of optimal control theory. It is intended for use in a one-semester graduate course or advanced undergraduate course. We are now making these ideas available in English to all those interested in optimal control theory. (West Lafayette, Indiana, USA; Leonard D. Berkovitz, Translation Editor.) Preface: This book is based on lectures I gave at the Tbilisi State University during the fall of 1974. It contains, in essence, the principles of general control theory and proofs of the maximum principle and basic existence theorems of optimal control theory. Although the proofs of the basic theorems presented here are far from being the shortest, I think they are fully justified from the conceptual viewpoint. In any case, the notions we introduce and the methods developed have one unquestionable advantage: they are constantly used throughout control theory, and not only for the proofs of the theorems presented in this book.

239 citations


Journal ArticleDOI
TL;DR: This paper presents an up-to-date survey of dynamic optimal control models for advertising, organized under four headings: advertising capital models, sales-advertising response models, micromodels, and control-theoretic empirical studies.
Abstract: The last ten years have seen a growing number of optimal control theory applications to the field of advertising. This paper presents an up-to-date survey of dynamic optimal control models in advertising that have appeared in the literature.The basic problem underlying these models is an optimal control problem to determine the optimal rate of advertising expenditures over time in a way that maximizes the present value of a firm’s net profit streams over a finite or infinite horizon. The profit depends on sales (or an appropriate surrogate), the state variable and the rate of advertising expenditures, the control variable. Sales, in turn, is related to advertising expenditures via a differential or difference equation termed a state equation.The models covered in this survey are organized under four headings: advertising capital models, sales-advertising response models, micromodels, and control-theoretic empirical studies. The discussion involves specifications, methods used, results and the economic sig...

236 citations


Proceedings ArticleDOI
C. Harvey1, Gunter Stein1
01 Dec 1977
TL;DR: In this paper, a new procedure for selecting weighting matrices in linear-quadratic optimal control designs is described based on asymptotic modal characteristics of multivariable LQ-regulators as control weights tend to zero.
Abstract: This paper describes a new procedure for selecting weighting matrices in linear-quadratic optimal control designs. The procedure is based on asymptotic modal characteristics of multivariable LQ-regulators as control weights tend to zero. The asymptotic behavior of both eigenvalues and eigenvectors is used to provide complete, unique specification of the weighting matrices. The procedure is illustrated with a simplified lateral-directional flight control design example.
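The cheap-control asymptotics the procedure is built on are easiest to see in the scalar case. The sketch below (illustrative numbers, not the paper's flight-control example) solves the scalar algebraic Riccati equation in closed form and shows the closed-loop eigenvalue receding toward minus infinity as the control weight r tends to zero:

```python
import math

def lqr_scalar(a, b, q, r):
    # Positive root of the scalar algebraic Riccati equation
    #   2*a*p - (b**2 / r) * p**2 + q = 0
    p = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2
    k = b * p / r          # optimal state-feedback gain, u = -k*x
    pole = a - b * k       # closed-loop eigenvalue: -sqrt(a^2 + b^2*q/r)
    return k, pole

# As the control weight r -> 0, the pole behaves like -b*sqrt(q/r):
# the "control weights tend to zero" asymptote the paper exploits.
for r in (1.0, 0.01, 1e-4):
    k, pole = lqr_scalar(a=1.0, b=1.0, q=1.0, r=r)
    print(r, pole)
```

In the multivariable setting of the paper, the same limit shapes both the eigenvalues and the eigenvectors, which is what allows the weighting matrices to be specified completely.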

230 citations


Book
01 Jan 1977
TL;DR: Guided weapon control systems.
Abstract: Guided weapon control systems, Agricultural Information and Communication Technology Center.

222 citations


Journal ArticleDOI
TL;DR: Mathematical methods of Markov chain theory are used to prove the inherent instability of the system and to find the optimal policy which maximizes the maximum achievable throughput with a stable channel.
Abstract: The purpose of this paper is to analyze and optimize the behavior of the broadcast channel for packet transmission operating in the slotted mode. Mathematical methods of Markov chain theory are used to prove the inherent instability of the system. If no control is applied, the effective throughput of the system will tend to zero if the population of user terminals is sufficiently large. Two classes of control policies are examined: the first acts on admissions to the channel from active terminals, and the second modifies the retransmission rate of packets. In each case sufficient conditions for channel stability are given. In the case of retransmission controls it is shown that only policies which assure a rate of retransmission from each blocked terminal of the form f = 1/n, where n is the total number of blocked terminals, will yield a stable channel. It is also proved that the optimal policy which maximizes the maximum achievable throughput with a stable channel is of the form f = (1 - k)/n. Simulations illustrating channel instability and the effect of the optimal control are provided.
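The stabilizing role of a 1/n retransmission rate can be illustrated with the standard slotted-channel calculation (a textbook computation, not the paper's Markov-chain proof): a slot carries a packet successfully exactly when one of the n blocked terminals transmits.

```python
def slot_success_prob(n, f):
    # P(exactly one of n blocked terminals transmits in a slot),
    # each transmitting independently with probability f
    return n * f * (1.0 - f) ** (n - 1)

# Under the stabilizing rule f = 1/n the per-slot success probability
# decreases toward 1/e ~ 0.368 as the backlog n grows, rather than
# collapsing to zero as it does with an uncontrolled retransmission rate.
for n in (2, 10, 100):
    print(n, round(slot_success_prob(n, 1.0 / n), 4))
```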

211 citations


Journal ArticleDOI
TL;DR: This paper provides a review of one of the basic problems of systems theory: the general time-invariant optimal control problem involving linear systems and quadratic costs. The theory is developed using simple properties of dynamical systems and involves a minimum of 'hard' analysis or algebra.

205 citations


Journal ArticleDOI
TL;DR: In this paper, the stabilizing property of linear quadratic state feedback (LQSF) design is used to obtain a quantitative measure of the robustness of LQSF designs in the presence of perturbations.
Abstract: The well-known stabilizing property of linear quadratic state feedback (LQSF) design is used to obtain a quantitative measure of the robustness of LQSF designs in the presence of perturbations. Bounds are obtained for allowable nonlinear, time-varying perturbations such that the resulting closed-loop system remains stable. The special case of linear, time-invariant perturbations is also treated. The bounds are expressed in terms of the weighting matrices in a quadratic performance index and the corresponding positive definite solution of the algebraic matrix Riccati equation, and are easy to compute for any given LQSF design. A relationship is established between the perturbation bounds and the dominant eigenvalues of the closed-loop optimal system model. Some interesting asymptotic properties of the bounds are also discussed. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed, based on LQSF theory, and the results presented in this paper. The variation of the perturbation bounds to changes in the weighting matrices in the LQSF design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable bound for perturbations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
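A bound of this general flavor can be computed from nothing more than a Lyapunov equation. The sketch below uses a hypothetical 2x2 closed-loop matrix and the common λmin(Q)/(2 λmax(P)) form of the perturbation bound; the paper's exact expressions are stated in terms of the weighting matrices and the Riccati solution of the underlying LQSF design.

```python
import numpy as np

def lyap(Ac, Q):
    # Solve the Lyapunov equation Ac.T @ P + P @ Ac = -Q by
    # Kronecker vectorization (row-major flattening)
    n = Ac.shape[0]
    M = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
    return (P + P.T) / 2.0       # symmetrize against round-off

# Hypothetical stable closed-loop matrix (A - B*K) from some LQSF design
Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
P = lyap(Ac, Q)

# Nonlinear, time-varying perturbations g(x, t) with ||g|| <= mu * ||x||
# leave the closed loop stable for this size of mu:
mu = np.linalg.eigvalsh(Q).min() / (2.0 * np.linalg.eigvalsh(P).max())
print(mu)
```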

164 citations


Journal ArticleDOI
TL;DR: In this paper, necessary conditions for the switching function holding at junction points of optimal interior and boundary arcs or at contact points with the boundary are given, where the transition from unconstrained to constrained extremals is discussed with respect to the order p of the state constraint.
Abstract: Necessary conditions for the switching function, holding at junction points of optimal interior and boundary arcs or at contact points with the boundary, are given. These conditions are used to derive necessary conditions for the optimality of junctions between interior and boundary arcs. The junction theorems obtained are similar to those developed for singular control problems in [1] and establish a duality between singular control problems and control problems with bounded state variables and control appearing linearly. The transition from unconstrained to constrained extremals is discussed with respect to the order p of the state constraint. A numerical example is given where the adjoint variables are not unique but form a convex set which is determined numerically.

158 citations



Journal ArticleDOI
TL;DR: The optimal control strategy with state variable constraints is studied and a numerical solution method is proposed for determining the optimal control for the case in which the intersection demand is predictable for the entire control period.

Journal ArticleDOI
TL;DR: An abstract model is proposed for the problem of optimal control of systems subject to random perturbations, for which the principle of optimality takes on an appealing form and the additional structure permits operationally useful optimality conditions.
Abstract: The paper proposes an abstract model for the problem of optimal control of systems subject to random perturbations, for which the principle of optimality takes on an appealing form. This model is specialized to the case where the state of the controlled system is realized as a jump process. The additional structure permits operationally useful optimality conditions. Some illustrative examples are solved.

Journal ArticleDOI
TL;DR: It is shown that nonlinear feedback solutions can be obtained, even for EM problem formulations which currently result in a two-point boundary-value problem, and a nonlinear controller for two-dimensional, minimum time aircraft climbs is derived.
Abstract: This paper develops a singular perturbation approach to extend existing energy management (EM) methods. A procedure is outlined for modeling altitude and flight path angle dynamics which are ignored in EM solutions. It is shown that nonlinear feedback solutions can be obtained, even for EM problem formulations which currently result in a two-point boundary-value problem. A nonlinear controller for two-dimensional, minimum time aircraft climbs is derived and numerical results for a fighter aircraft are given. The procedure outlined in this paper is general and applicable to solving a wide class of optimal control problems. It avoids the problem of picking the unknown boundary conditions at the initial and terminal times to suppress the unstable modes in the boundary layer.

Journal ArticleDOI
TL;DR: In this article, mathematical models of the control behavior of human drivers while following another vehicle in single lane traffic are presented. But the focus is on the representation of the individual driver, rather than on such abstract parameters of multi-lane traffic as average density or average velocity.
Abstract: This paper is concerned with mathematical models of the control behavior of human drivers while following another vehicle in single lane traffic. The emphasis is on the representation of the individual driver, rather than on such abstract parameters of multi-lane traffic as average density or average velocity. Three basic types of approaches to representing the driver's control strategy are reviewed. First is a classical control structure in which assumptions concerning the stimulus-response characteristics of the driver are included, and a form for his control strategy algorithm is assumed. The second class of models is based on optimal control theory; the major feature of this class of models is that an assumed performance index is explicitly included in the formulation, so that the driver's control strategy arises as a result of his attempts to minimize this index or criterion. The third class of models reviewed in the paper are heuristic models, which arise from control theory. The first of these, ter...

Journal ArticleDOI
TL;DR: It is shown that coordination of the signals, according to analytical relationships developed here, is necessary for optimal operation and it is demonstrated that the entire congestion period must be divided into two intervals and that the optimal control in both intervals is determined by considering separate test functions.

Journal ArticleDOI
TL;DR: In this article, it was shown that the optimal path of a single-state single-control problem is the nearest feasible path to the stationary level, as long as the stationary level is sustainable by a feasible control.
Abstract: Many infinite-horizon optimal control problems in management science and economics have optimal paths that approach some stationary level. Often, this path has the property of being the nearest feasible path to the stationary equilibrium. This paper obtains a simple multiplicative characterization for a single-state single-control problem to have this property. By using Green's theorem it is shown that the property is observed as long as the stationary level is sustainable by a feasible control. If not, the property is, in general, shown to be false. The paper concludes with an important theorem which states that even in the case of multiple equilibria, the optimal path is a nearest feasible path to one of them.

01 Oct 1977
TL;DR: A rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle andtask to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task.
Abstract: A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

Journal ArticleDOI
TL;DR: In this paper, the optimal control of nonlinear dynamical systems on a finite time interval is considered; the existence of a solution is proved and a power series solution of both problems is constructed.
Abstract: In this paper the optimal control of nonlinear dynamical systems on a finite time interval is considered. The free end-point problem as well as the fixed end-point problem is studied. The existence of a solution is proved and a power series solution of both problems is constructed.

Journal ArticleDOI
TL;DR: In this paper, a new multi-pass dynamic programming method is used to find optimal control trajectories for up to five maneuverable generators.
Abstract: Current research in Automatic Generation Control emphasizes coordination of the regulation and the economic dispatch functions into a single systems problem. A dynamic optimal control problem formulation has previously been suggested. In this paper optimal trajectories are found for up to five maneuverable generators, using a new multi-pass dynamic programming method to make such solutions feasible. Dynamic valve point loading and singular solutions are considered. Computer studies have applied the method to several examples, including sudden changes in area load, and supplying the morning rise in area load.

Journal ArticleDOI
TL;DR: The problem of forcing the state of a linear, multivariable, sampled-data system to zero in a minimum number of time steps is discussed and the solution to the posed problem is given as linear state feedbacks.
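Such minimum-time ("deadbeat") state feedback places every closed-loop eigenvalue at z = 0, making A - BK nilpotent so the state of an n-th order system is annihilated in at most n steps. A minimal sketch on a sampled double integrator (an illustrative system, not one from the paper):

```python
import numpy as np

# Sampled double integrator: x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])

# Gain chosen so that the closed-loop matrix A - B*K has characteristic
# polynomial z^2, i.e. (A - B*K) is nilpotent of index 2
K = np.array([[1.0, 2.0]])
Acl = A - B @ K

# Any initial state is driven exactly to zero in n = 2 steps
x = np.array([[3.0], [-1.0]])
for _ in range(2):
    x = Acl @ x        # closed loop with u = -K x
print(x.ravel())
```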

Journal ArticleDOI
L.E. Bobisud1
TL;DR: In this article, the optimal control by immunization of a general deterministic model of an epidemic is examined when cost is measured by the maximum number of infectives plus a measure of the control effort expended.
Abstract: Optimal control by immunization of a general deterministic model of an epidemic is examined when cost is measured by the maximum number of infectives plus a measure of the control effort expended. Results guaranteeing that the optimal control has one switch are presented, as are conditions under which the optimal control is to allow the epidemic to run its course unchecked. Optimal control by more efficient removal of infectives is also considered.

Journal ArticleDOI
TL;DR: It is shown that a singular level of advertising exists and that it is smaller than the singular or optimal stationary level obtained by Nerlove and Arrow and two simple computational algorithms are provided to obtain the optimal path including the end-game situation as a result of the finite horizon.
Abstract: This paper considers an optimal control problem for the dynamics of the Nerlove-Arrow advertising model, the optimal control being the rate of advertising expenditure required to maximize the present value of net profit streams (or, sales) over a finite horizon subject to a budget constraint. The maximum available advertising budget is given in present-value terms. It is shown that a singular level of advertising exists and that it is smaller than the singular or optimal stationary level obtained by Nerlove and Arrow. Along with the forms of the optimal control in all possible cases, two simple computational algorithms are also provided to obtain the optimal path including the end-game situation as a result of the finite horizon. Closed-form solutions are worked out for simple examples illustrating the use of the algorithm. An analogy with a car with a given amount of fuel (budget) concludes the paper.
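The state equation behind the model is the Nerlove-Arrow goodwill dynamic dG/dt = u(t) - delta*G(t). The crude bang-then-hold policy below (hypothetical numbers, and only a caricature of the paper's bang-singular structure) shows how a constant spend u = delta*G_s sustains a stationary level G_s once it is reached:

```python
# Euler simulation of Nerlove-Arrow goodwill dynamics dG/dt = u - delta*G.
# All parameter values are hypothetical, chosen only for illustration.
delta, G_s, dt = 0.1, 50.0, 0.01
G = 20.0                                 # initial goodwill below G_s
for _ in range(100_000):                 # 1000 time units
    # spend at a high rate until the singular level, then hold it there
    u = delta * G_s if G >= G_s else 10.0
    G += dt * (u - delta * G)
print(round(G, 2))                       # settles at the stationary level
```

The paper's actual policy must additionally respect the present-value budget constraint, which is what produces the end-game behavior near the finite horizon.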


W. Johnson1
01 May 1977
TL;DR: In this paper, an optimal control solution for the descent and landing of a helicopter after the loss of power in level flight was obtained for the purpose of minimizing the vertical and horizontal velocity at contact with the ground.
Abstract: An optimal control solution is obtained for the descent and landing of a helicopter after the loss of power in level flight. The model considers the helicopter vertical velocity, horizontal velocity, and rotor speed; and it includes representations of ground effect, rotor inflow time lag, pilot reaction time, rotor stall, and the induced velocity curve in the vortex ring state. The control (rotor thrust magnitude and direction) required to minimize the vertical and horizontal velocity at contact with the ground is obtained using nonlinear optimal control theory. It is found that the optimal descent after power loss in hover is a purely vertical flight path. Good correlation, even quantitatively, is found between the calculations and (non-optimal) flight test results.

Journal ArticleDOI
TL;DR: In this paper, singular perturbation theory is applied to the stochastic control for the linear quadratic Gaussian (L-Q-G) problem for systems with fast and slow modes.
Abstract: This paper applies singular perturbation theory to the stochastic control for the Linear-Quadratic-Gaussian (L-Q-G) problem for systems with fast and slow modes. The limiting behavior of the optimal control and the performance index is investigated. It is shown that the optimal control can be approximated by a near optimal control which is obtained as a combination of a slow control and a fast control computed in separate time scales.

Journal ArticleDOI
01 Mar 1977
TL;DR: A new approach to the development of multilevel control and estimation schemes for large-scale systems with a major emphasis on the reliability of performance under structural perturbations is described, conducted within a decomposition-decentralization framework.
Abstract: A new approach to the development of multilevel control and estimation schemes for large-scale systems with a major emphasis on the reliability of performance under structural perturbations is described. The study is conducted within a decomposition-decentralization framework and leads to simple and noniterative control and estimation schemes. The solution to the control problem involves the design of a set of locally optimal controllers for the individual subsystems in a completely decentralized environment and a global controller on a higher hierarchical level that provides corrective signals to account for the interconnection effects. Similar principles are employed to develop an estimation scheme, which consists of a set of decentralized optimal estimators for the subsystems, together with certain compensating signals for measurements. The principal feature of the paper is a detailed study of the system structure and the consequent classification of interconnection patterns into several interesting categories (beneficial, nonbeneficial, and neutral) based on their effects on decentralized control and estimation.

Journal ArticleDOI
TL;DR: In this article, the theory of optimal control is applied to intermittent heating and the general characteristics of the control are determined and their strategies compared with the performance of existing "optimum start controls".

Journal ArticleDOI
TL;DR: In this paper, the optimal control of a stochastic system with both complete and partial observations is considered, and it is shown that, almost surely, the optimum control should minimize the conditional expectation of a certain Hamiltonian, with respect to an optimum measure and the observed $\sigma $-field.
Abstract: The optimal control of a stochastic system with both complete and partial observations is considered. In the completely observable case, because the cost function is, in the terminology of Meyer, a “semimartingale speciale,” a dynamic programming condition for the optimal control is obtained in terms of a certain Hamiltonian. The partially observable case is then discussed from first principles, and it is shown that, almost surely, the optimum control should minimize the conditional expectation of a certain Hamiltonian, with respect to an optimum measure and the observed $\sigma $-field.

Journal ArticleDOI
TL;DR: In this article, the optimal control of a system where the state is modeled by a homogeneous diffusion process in $R^1 $ was studied. And sufficient conditions were found to determine the optimal policy in both an infinite horizon case with discounting and a finite horizon case.
Abstract: This paper concerns the optimal control of a system where the state is modeled by a homogeneous diffusion process in $R^1 $. Each time the system is controlled a fixed cost is incurred as well as a cost which is proportional to the magnitude of the control applied. In addition to the cost of control, there are holding or carrying costs incurred which are a function of the state of the system. Sufficient conditions are found to determine the optimal control in both an infinite horizon case with discounting and a finite horizon case. In both cases the optimal policy is one of “impulse” control originally introduced by Bensoussan and Lions [2] where the system is controlled only a finite number of times in any bounded time interval and the control requires an instantaneous finite change in the state variable. The issue of the existence of such controls is not addressed.
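The flavor of such an impulse policy is easy to simulate: let the state diffuse freely and intervene, paying a fixed cost plus a proportional cost, only when the state leaves a band. All numbers below are hypothetical, and the band is imposed rather than derived; the paper's contribution is sufficient conditions characterizing the optimal policy.

```python
import random

random.seed(0)
KF, c = 5.0, 1.0                    # fixed + proportional intervention costs
lo, hi = -2.0, 2.0                  # intervention band (hypothetical)
x, total_cost, interventions = 0.0, 0.0, 0
for _ in range(10_000):
    x += random.gauss(0.0, 0.3)     # uncontrolled diffusion step
    total_cost += abs(x)            # state-dependent holding cost
    if x < lo or x > hi:            # impulse: instantaneous jump back to 0
        total_cost += KF + c * abs(x)
        x = 0.0
        interventions += 1
print(interventions, round(total_cost, 1))
```

The fixed cost KF is what makes continuous intervention infinitely expensive, so the optimal policy acts only finitely often in any bounded interval, exactly the "impulse control" structure of Bensoussan and Lions cited in the abstract.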