
Showing papers in "Siam Journal on Control and Optimization in 1976"


Journal ArticleDOI
TL;DR: In this paper, the proximal point algorithm is investigated in a more general form where the requirement for exact minimization at each iteration is weakened and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T.
Abstract: For the problem of minimizing a lower semicontinuous proper convex function f on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{ z^k \} $ by taking $z^{k + 1} $ to be the minimizer of $f(z) + ({1 / {2c_k }})\| {z - z^k } \|^2 $, where $c_k > 0$. This algorithm is of interest for several reasons, but especially because of its role in certain computational methods based on duality, such as the Hestenes-Powell method of multipliers in nonlinear programming. It is investigated here in a more general form where the requirement for exact minimization at each iteration is weakened, and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T. Convergence is established under several criteria amenable to implementation. The rate of convergence is shown to be “typically” linear with an arbitrarily good modulus if $c_k $ stays large enough, in fact superlinear if $c_k \to \infty $. The case of $T = \partial f$ is treated in extra detail. Applicati...
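As an illustrative sketch of the exact proximal point iteration described above (the quadratic f, the schedule for $c_k$, and the dimensions below are invented for the example; the paper's relaxed, inexact version is not implemented), in Python with NumPy:

```python
import numpy as np

# Exact proximal point iteration for a convex quadratic
# f(z) = 0.5 z'Qz - b'z (an illustrative choice, not from the paper):
# z^{k+1} = argmin_z f(z) + (1/(2 c_k)) ||z - z^k||^2,
# which here reduces to solving (Q + I/c_k) z = b + z^k / c_k.

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + 0.1 * np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)
z_star = np.linalg.solve(Q, b)      # true minimizer, for reference

z = np.zeros(n)
for k in range(50):
    c_k = 10.0 * (k + 1)            # c_k -> infinity
    z = np.linalg.solve(Q + np.eye(n) / c_k, b + z / c_k)

print("error after 50 iterations:", np.linalg.norm(z - z_star))
```

With $c_k \to \infty $ the per-step contraction factor tends to zero, consistent with the superlinear rate quoted in the abstract.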

3,238 citations


Journal ArticleDOI
TL;DR: In this article, a complete abstract invariant and a set of canonical forms under dynamic compensation are presented for linear systems characterized by proper, rational transfer matrices.
Abstract: This paper is concerned with the development of a complete abstract invariant as well as a set of canonical forms under dynamic compensation for linear systems characterized by proper, rational transfer matrices. More specifically, it is shown that one can always associate with any proper rational transfer matrix, $T(s)$, a special lower left triangular matrix, $\xi _T (s)$, called the interactor. This matrix is then shown to represent an abstract invariant under dynamic compensation which, together with the rank of $T(s)$, represents a complete abstract invariant. A set of canonical forms under dynamic compensation is also developed along with appropriate dynamic compensation.

377 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply the methods developed in [1] and [2] to solve the problem of optimal stochastic control for a linear quadratic system.
Abstract: The purpose of this paper is to apply the methods developed in [1] and [2] to solve the problem of optimal stochastic control for a linear quadratic system. After proving some preliminary existence results on stochastic differential equations, we show the existence of an optimal control. The introduction of an adjoint variable enables us to derive extremality conditions: the control is thus obtained in random “feedback” form. By using a method close to the one used by Lions in [4] for the control of partial differential equations, a priori estimates are obtained. A formal Riccati equation is then written down, and the existence of its solution is proved under rather general assumptions. For a more detailed treatment of some examples, the reader is referred to [1].
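As a finite-dimensional, deterministic companion to the Riccati equation mentioned above (a minimal sketch; the double-integrator matrices and horizon are invented, and the paper's stochastic, infinite-dimensional setting is not reproduced):

```python
import numpy as np

# Backward-Euler integration of the Riccati ODE of linear-quadratic control,
#   -dP/dt = A'P + PA - P B R^{-1} B' P + Q,   P(T) = 0,
# yielding the feedback gain K(t) = R^{-1} B' P(t), i.e. u = -K x.

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = np.zeros((2, 2))                      # terminal condition P(T) = 0

T, steps = 5.0, 5000
dt = T / steps
for _ in range(steps):
    Pdot = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
    P = P + dt * Pdot                     # stepping backward in time

K = np.linalg.solve(R, B.T @ P)           # feedback gain at t = 0
print("gain at t=0:", K)
```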

307 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal control system with given initial and terminal constraints and a cost functional is considered, and necessary conditions for optimality are derived in a form similar to Pontryagin's maximum principle under hypotheses which are in a certain sense minimal in order that the problem be meaningful.
Abstract: We consider the optimal control system \[\dot x(t) = f(t,x(t),u(t)),\quad u(t) \in U(t)\quad {\text{a.e.}}\] with given initial and terminal constraints and a cost functional. We derive necessary conditions for optimality in a form similar to Pontryagin’s maximum principle under hypotheses which are in a certain sense minimal in order that the problem be meaningful. In particular we do not assume $f(t,s,u)$ continuous in u or differentiable in s, nor do we require $U(t)$ or $f(t,s,U(t))$ to be bounded or closed. These necessary conditions, which are expressed in terms of certain “generalized Jacobians,” reduce to the usual ones when classical hypotheses are imposed.

193 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that all local martingales of the $\sigma $-fields generated by a jump process of very general type can be represented as stochastic integrals with respect to a fundamental family of martingales associated with the jump process.
Abstract: In this paper it is shown that all local martingales of the $\sigma $-fields generated by a jump process of very general type can be represented as stochastic integrals with respect to a fundamental family of martingales associated with the jump process.

125 citations


Journal ArticleDOI
TL;DR: In this article, a generalized class of quadratic penalty function methods for nonconvex nonlinear programming problems is considered and convergence and rate of convergence results for the sequences of primal and dual variables generated are obtained.
Abstract: In this paper we consider a generalized class of quadratic penalty function methods for the solution of nonconvex nonlinear programming problems. This class contains as special cases both the usual quadratic penalty function method and the recently proposed multiplier method. We obtain convergence and rate of convergence results for the sequences of primal and dual variables generated. The convergence results for the multiplier method are global in nature and constitute a substantial improvement over existing local convergence results. The rate of convergence results show that the multiplier method should be expected to converge considerably faster than the pure penalty method. At the same time, we construct a global duality framework for nonconvex optimization problems. The dual functional is concave, everywhere finite, and has strong differentiability properties. Furthermore, its value, gradient and Hessian matrix within an arbitrary bounded set can be obtained by unconstrained minimization of a certain...
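A toy sketch of the pure quadratic penalty method contained in this class (the two-variable problem below is invented for illustration; SciPy's BFGS stands in for the inner unconstrained minimization):

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic penalty method: minimize x1^2 + 2*x2^2 subject to x1 + x2 = 1
# via unconstrained subproblems  min_x f(x) + (c_k/2) h(x)^2, c_k increasing.
# The exact solution is (2/3, 1/3).

f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

x = np.zeros(2)
for c in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize(lambda x: f(x) + 0.5 * c * h(x) ** 2, x, method="BFGS")
    x = res.x          # warm-start the next subproblem
print("penalty solution:", x)
```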

115 citations


Journal ArticleDOI
TL;DR: In this article, the basic dual problem and extended dual problem associated with a two-stage stochastic program are shown to be equivalent, if the program is strictly feasible and satisfies a condition generalizing, in a sense, the condition of relatively complete recourse.
Abstract: The basic dual problem and extended dual problem associated with a two-stage stochastic program are shown to be equivalent, if the program is strictly feasible and satisfies a condition generalizing, in a sense, the condition of relatively complete recourse in stochastic linear programming. Combined with earlier results, this yields the fact that, under the same assumptions, solutions to the program can be characterized in terms of saddle points of the basic Lagrangian. A couple of examples are used to illustrate the salient points of the theory. The last section contains a review of the principal implications of the results of this paper combined with those of three preceding papers also devoted to stochastic convex programs.

95 citations


Journal ArticleDOI
TL;DR: In this paper, a generalization of the rank condition for controllability and observability of linear autonomous finite-dimensional systems to the general case when both the state space and the control space are infinite-dimensional Banach spaces and the operator A acting on the state is only assumed to generate a strongly continuous semigroup (group) is sought.
Abstract: Generalizations of the familiar rank conditions for controllability and observability of linear autonomous finite-dimensional systems to the general case when both the state space and the control space are infinite-dimensional Banach spaces and the operator A acting on the state is only assumed to generate a strongly continuous semigroup (group) are sought. It is shown that a suitable version of the rank condition, although generally only sufficient for approximate controllability (observability), is however “essentially” necessary and sufficient in two important cases: (i) when A generates an analytic semigroup, (ii) when A generates a group. Such generalization of the rank condition is then used to derive, in turn, easy-to-check tests for approximate controllability (observability) for the important class of normal operators with compact resolvent. In the case of finite number of scalar controls (observations), the tests are expressed by a sequence of rank conditions, using the complete set of eigenvect...
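For reference, the familiar finite-dimensional rank condition that the paper generalizes can be checked mechanically; a minimal sketch with an invented pair $(A,B)$:

```python
import numpy as np

# (A, B) is controllable iff rank [B, AB, ..., A^{n-1}B] = n.
def controllability_matrix(A, B):
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = controllability_matrix(A, B)
print("controllable:", np.linalg.matrix_rank(C) == A.shape[0])
```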

86 citations


Journal ArticleDOI
TL;DR: A class of combined primal–dual and penalty methods for constrained minimization which generalizes the method of multipliers is proposed and analyzed and it is shown that the rate of convergence may be linear or superlinear with arbitrary Q-order of convergence depending on the problem at hand.
Abstract: In this paper we propose and analyze a class of combined primal–dual and penalty methods for constrained minimization which generalizes the method of multipliers. We provide a convergence and rate of convergence analysis for these methods for the case of a convex programming problem. We prove global convergence in the presence of both exact and inexact unconstrained minimization, and we show that the rate of convergence may be linear or superlinear with arbitrary Q-order of convergence depending on the problem at hand and the form of the penalty function employed.
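A matching sketch of the method of multipliers that this class generalizes, on the same invented problem as the penalty sketch above; note that the penalty parameter c can stay fixed here:

```python
import numpy as np
from scipy.optimize import minimize

# Method of multipliers (augmented Lagrangian): alternate minimization of
#   L_c(x, y) = f(x) + y h(x) + (c/2) h(x)^2
# with the dual update  y <- y + c h(x).  The exact multiplier is -4/3.

f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

x, y, c = np.zeros(2), 0.0, 10.0
for _ in range(10):
    res = minimize(lambda x: f(x) + y * h(x) + 0.5 * c * h(x) ** 2,
                   x, method="BFGS")
    x = res.x
    y = y + c * h(x)             # multiplier (dual) iteration
print("solution:", x, "multiplier:", y)
```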

79 citations


Journal ArticleDOI
TL;DR: In this article, a two-person zero-sum differential game is considered, whose dynamics are interpreted using the Girsanov measure transformation method, and it is shown that the upper and lower values of the game are equal and there is a saddle point in feedback strategies.
Abstract: Using the techniques of Davis and Varaiya [3], [4] a two-person zero-sum differential game is considered, whose dynamics are interpreted using the Girsanov measure transformation method. If the Isaacs condition holds it is shown that the upper and lower values of the game are equal and there is a saddle point in feedback strategies. The central point of the mathematics is that analogues of the time derivative and gradient of the upper value function are constructed using martingale methods; because the Hamiltonian satisfies a saddle condition at each point these then also give the lower value.

77 citations


Journal ArticleDOI
TL;DR: In this article, the convergence properties for the solution of the discrete time Riccati matrix equation are extended to Riccati operator equations such as arise in a gyroscope noise filtering problem, and the stability results are generalized to time-varying problems.
Abstract: The convergence properties for the solution of the discrete time Riccati matrix equation are extended to Riccati operator equations such as arise in a gyroscope noise filtering problem. Stabilizability and detectability are shown to be necessary and sufficient conditions for the existence of a positive semidefinite solution to the algebraic Riccati equation which has the following properties (i) it is the unique positive semidefinite solution to the algebraic Riccati equation, (ii) it is converged to geometrically in the operator norm by the solution to the discrete Riccati equation from any positive semidefinite initial condition, (iii) the associated closed loop system converges uniformly geometrically to zero and solves the regulator problem, and (iv) the steady state Kalman–Bucy filter associated with the solution to the algebraic Riccati equation is uniformly asymptotically stable in the large. These stability results are then generalized to time-varying problems; also it is shown that even in infini...
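A finite-dimensional sketch of property (ii) above: iterating the discrete-time Riccati recursion from a zero initial condition and watching the successive increments shrink at a roughly constant (geometric) ratio. The matrices are invented; $(A,B)$ is controllable and $Q = I$ gives detectability:

```python
import numpy as np

# Discrete-time Riccati recursion
#   P_{k+1} = A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A + Q
# converging geometrically to the algebraic Riccati solution.

A = np.array([[1.1, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def riccati_step(P):
    S = R + B.T @ P @ B
    return A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A) + Q

P, steps = np.zeros((2, 2)), []
for _ in range(40):
    P_next = riccati_step(P)
    steps.append(np.linalg.norm(P_next - P))
    P = P_next

ratios = [steps[k + 1] / steps[k] for k in range(5, 10)]
print("successive step ratios (roughly constant < 1):", ratios)
print("converged P:\n", P)
```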

Journal ArticleDOI
TL;DR: In this article, it is shown that there exists a feedback map F for which $\dot x = (A + DFC)x + Bu$ is controllable if and only if the number of transmission polynomials of $(C,A,B)$ is no greater than the rank of the transfer matrix; under mild assumptions, the transmission polynomials are the numerator polynomials of the rational functions which appear in the Smith–McMillan form of the transfer matrix.
Abstract: It is shown for the controllable linear system $\dot x = Ax + Bu + Dv$, $y = Cx$ that there exists a feedback map F for which $\dot x = (A + DFC)x + Bu$ is controllable if and only if the number of transmission polynomials of $(C,A,B)$ is no greater than the rank of the (nonzero) transfer matrix of $(C,A,B)$. If this condition fails to hold, then for all F, the spectrum of $A + DFC$ contains a uniquely determined subset of transmission zeros, and this subset coincides with the spectrum of $A + DFC$ modulo the controllable space of $(A + DFC,B)$ whenever F is selected so that the dimension of the controllable space is as large as possible. Under mild assumptions, the transmission polynomials are identified as the numerator polynomials of the rational functions which appear in the Smith–McMillan form of the transfer matrix of $(C,A,B)$.
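A numerical companion to the abstract above (matrices drawn at random purely for illustration): for a generically chosen F the controllable subspace of $(A + DFC, B)$ attains its maximal dimension, so a single random trial gives a cheap check of whether controllability can be preserved:

```python
import numpy as np

# For a random F, the dimension of the controllable subspace of
# (A + D F C, B) is generically maximal over all choices of F.

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((2, n))
D = rng.standard_normal((n, 2))

def ctrb_rank(A, B):
    M, blocks = B, [B]
    for _ in range(A.shape[0] - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

F = rng.standard_normal((2, 2))
print("dim of controllable subspace of (A + D F C, B):",
      ctrb_rank(A + D @ F @ C, B), "of", n)
```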

Journal ArticleDOI
TL;DR: In this paper, the authors consider the linear, quadratic control and filtering problems for systems defined by integral equations given in terms of evolution operators and prove that the solution to both problems leads to an integral Riccati equation which possesses a unique solution.
Abstract: In the paper we consider the linear, quadratic control and filtering problems for systems defined by integral equations given in terms of evolution operators. We impose very weak conditions on the evolution operators and prove that the solution to both problems leads to an integral Riccati equation which possesses a unique solution. By imposing more structure on the evolution operator we show that the integral Riccati equation can be differentiated, and finally by considering an even smaller class of evolution operators we are able to prove that the differentiated version has a unique solution. The motivation for the study of such systems is that they enable us to consider wide classes of differential delay equations and partial differential equations in the same formulation. We derive new results for such a system and show how all of the existing results can be obtained directly by our methods.

Journal ArticleDOI
TL;DR: In this paper, the problem of minimizing a functional of the type $l(x(0),x(1)) + \int_0^1 {L(t,x,\dot x)dt} $, where l and L are permitted to attain the value $ + \infty $, is considered.
Abstract: We consider the problem of minimizing a functional of the type \[l(x(0),x(1)) + \int_0^1 {L(t,x,\dot x)dt,} \] where l and L are permitted to attain the value $ + \infty $. We show that many standard variational and optimal control problems may be expressed in this form. In terms of certain generalized gradients, we obtain necessary conditions satisfied by solutions to the problem, in the form of a generalized Euler–Lagrange equation. We also extend the necessary condition of Weierstrass to this setting. The results obtained allow one to treat not only the standard problems but others as well, bringing under one roof the classical (differentiable) situation, the cases where convexity assumptions replace differentiability, and new problems where neither intervene. We apply the results in the final section to derive a new version of the maximum principle of optimal control theory.

Journal ArticleDOI
TL;DR: In this paper, an infinite-dimensional theory in the same style is presented whereby the input-output maps are assumed to exhibit some energy conservation properties and whereby these maps are ascertained to belong to a class of systems with nontrivial nullspace, called "roomy" systems.
Abstract: In the classical dynamic systems theory, precise information about a system can be deduced from its input–output map. In fact, for minimal systems, a complete pole-zero theory can be constructed using polynomial coprime factorization techniques, together with algebraic properties of the state space module. In this paper, an infinite-dimensional theory in the same style is presented whereby the input–output maps are assumed to exhibit some energy conservation properties and whereby these maps are ascertained to belong to a class of systems with nontrivial nullspace, called “roomy” systems. As a result, a coprime factorization theory can be deduced based not on properties of polynomials but of analytic functions, a complete polar description of the system can be given and a zero description for a somewhat more restricted class. The mathematical tools used lean heavily on Helson and Lowdenslager’s invariant subspace theory of Hardy spaces which quite naturally comes into play through the Bochner–Chandrase...

Journal ArticleDOI
TL;DR: In this paper, the authors study the stability properties of a dynamic system on a metric space, give necessary and sufficient criteria for stability of sets and points, and show that a subset of the space is stable iff it is an inverse image of a Pareto minimal point of a vector-valued function which decreases along the generated sequences.
Abstract: Let X be a metric space. A dynamic system on X is a set-valued function $\varphi $ from X to X which satisfies $\varphi (x) \ne \emptyset $ for $x \in X$. It generates $\varphi $-sequences: \[x^{(t + 1)} \in \varphi (x^{(t)} ),\quad t = 0,1,2, \cdots ,x^{(0)} \in X.\] We study the stability properties of such dynamic systems. Necessary and sufficient criteria for stability of sets and points are given. The main result is, essentially, that a subset of X is stable iff it is an inverse image of a Pareto minimal point of a vector-valued function which decreases along $\varphi $-sequences. As a corollary we obtain a characterization of all stable sets and points of Stearns’ transfer schemes as generalized nucleoli. In particular, the “lexicographic kernel” is always a stable set of the bargaining sets which may not include the nucleolus. All nonempty $\varepsilon $-cores are also stable sets of the bargaining sets.

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are given for the exact state controllability of a linear autonomous differential difference equation of neutral type on the Sobolev state space.
Abstract: Necessary and sufficient conditions for the exact state controllability of the linear autonomous differential difference equation of neutral type, $\dot x(t) = A_{ - 1} \dot x(t - h) + A_0 x(t) + A_1 x(t - h) + Bu(t)$, are given for the Sobolev state space $W_2^{(1)} ([ - h,0],R^n )$. In particular when B is an $n \times 1$ matrix, it is shown that the controllability of the above n-dimensional system on the interval $[0,\tau ]$, $\tau > nh$, is equivalent to rank $[B,A_{ - 1} B, \cdots ,A_{ - 1}^{n - 1} B] = n$ and that a certain two-point boundary value problem for a related homogeneous ordinary differential equation has only the trivial solution. Practical criteria based thereon entail only elementary computations involving the coefficient matrices $[A_{ - 1} ,A_0 ,A_1 ,B]$ but these computations can be tedious when $n > 3$. The condition that the two-point boundary value problem have only the trivial solution is often equivalent to a much simpler condition: $K(\lambda )\mathcal{S}_\lambda ^n \ne 0$ f...

Journal ArticleDOI
TL;DR: In this article, the concepts of input chain and controllability chain are introduced and the structure of controllable subspaces of a linear system is investigated, and the feedback simulation problem is defined and solved.
Abstract: The concepts of input chain and controllability chain are introduced, and the structure of controllability subspaces of a linear system is investigated. It is shown that the input and controllability chains are the fundamental feedback invariants of a linear system.The feedback simulation problem (a generalization of the feedback equivalence problem) is defined and solved.

Journal ArticleDOI
TL;DR: In this paper, the Lagrange dual of control problems with linear dynamics, convex cost and convex inequality state and control constraints is analyzed, and a necessary and sufficient condition for feasible solutions in the primal and dual problems to be optimal is also given.
Abstract: The Lagrange dual of control problems with linear dynamics, convex cost and convex inequality state and control constraints is analyzed. If an interior point assumption is satisfied, then the existence of a solution to the dual problem is proved; if there exists a solution to the primal problem, then a complementary slackness condition is satisfied. A necessary and sufficient condition for feasible solutions in the primal and dual problems to be optimal is also given. The dual variables p and v corresponding to the system dynamics and state constraints are proved to be of bounded variation while the multiplier corresponding to the control constraints is proved to lie in $\mathcal{L}^1 $. Finally, a control and state minimum principle is proved. If the cost function is differentiable and the state constraints have two derivatives, then the state minimum principle implies that a linear combination of p and v satisfy the conventional adjoint condition for state constrained control problems.

Journal ArticleDOI
TL;DR: In this paper, a general class of linear infinite-dimensional systems is modeled as an abstract evolution equation, which includes linear ordinary differential equations, classes of linear partial differential equations and linear differential delay equations.
Abstract: The filtering, smoothing and prediction problems are solved for a general class of linear infinite-dimensional systems. The dynamical system is modeled as an abstract evolution equation, which includes linear ordinary differential equations, classes of linear partial differential equations and linear differential delay equations. The noise process is modeled using a stochastic integral with respect to a class of Hilbert space-valued stochastic processes, which includes the Wiener process and the Poisson process as special cases. The observation process is finite-dimensional and is corrupted by Gaussian-type white noise, which is modeled using the Wiener integral. The theory is illustrated by an application to an environmental problem.
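The finite-dimensional, discrete-time Kalman filter is the special case this abstract theory reduces to when the state space is $R^n $; a minimal sketch with invented system matrices and Gaussian noise:

```python
import numpy as np

# Discrete-time Kalman filter: predict with the dynamics, then correct
# with the noisy scalar observation y = H x + v.

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Qw = 0.01 * np.eye(2)            # process noise covariance
Rv = np.array([[0.1]])           # observation noise covariance

x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Qw)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), Rv)
    # predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Qw
    # update
    S = H @ P @ H.T + Rv
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
print("estimation error:", np.linalg.norm(x_true - x_hat))
```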

Journal ArticleDOI
TL;DR: In this paper, a two-point boundary value problem is considered and conditions are given under which there exists an outer solution and a left and right boundary-layer solution whose sum constitutes a solution of the system which degenerates uniformly on compact sets to the reduced solution.
Abstract: This paper considers a two-point boundary value problem which arises from an application of the Pontryagin maximal principle to some underlying optimal control problem. The system depends singularly upon a small parameter, $\varepsilon $. It is assumed that there exists a continuous solution of the system when $\varepsilon = 0$, known as the reduced solution. Conditions are given under which there exists an “outer solution”, and “left and right boundary-layer solutions” whose sum constitutes a solution of the system which degenerates uniformly on compact sets to the reduced solution. The principal tool used in the proof is a Banach space implicit function theorem.

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for the study of the convergence properties of optimal control algorithms and illustrate its use by means of two examples. The framework consists of an algorithm prototype with a convergence theorem, together with some results in relaxed controls theory.
Abstract: This paper presents a framework for the study of the convergence properties of optimal control algorithms and illustrates its use by means of two examples. The framework consists of an algorithm prototype with a convergence theorem, together with some results in relaxed controls theory.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the local controllability of a rectangular parallelopiped system is locally controllable, i.e., the set of states in a certain function space which can be reached from $(0,0)$ in a finite time $T < \infty$ using boundary controls is an open neighborhood of $( 0, 0) in that function space.
Abstract: On a rectangular parallelepiped in $R^N $, $N \geqq 2$, we consider the equation $u_{tt} = \Delta u + f(u,u_t )$, where f is a nonlinear perturbation meeting certain conditions. We prove that the above system is locally controllable at $u = 0$, $u_t = 0$; i.e., the set of states in a certain function space which can be reached from $(0,0)$ in a finite time $T < \infty $ using boundary controls is an open neighborhood of $(0,0)$ in that function space. These results generalize to the nonlinear case conclusions obtained by Russell for the linear wave equation, in which global controllability was established.

Journal ArticleDOI
TL;DR: Multistage stochastic programming with recourse is defined recursively as a natural extension of two-stage stochastic programming with recourse, and some existing results for two-stage problems are examined.
Abstract: Multistage stochastic programming with recourse is defined recursively as a natural extension of two-stage stochastic programming with recourse. Some existing results for two-stage problems are ext...

Journal ArticleDOI
TL;DR: In this article, a theory of infinite-dimensional time-invariant continuous-time systems is developed in terms of modules defined over a convolution ring of generalized functions, where input/output operators are formulated as module homomorphisms between free modules over the convolution.
Abstract: A theory of infinite-dimensional time-invariant continuous-time systems is developed in terms of modules defined over a convolution ring of generalized functions. In particular, input/output operators are formulated as module homomorphisms between free modules over the convolution ring, and systems are defined in terms of a state module. Results are presented on causality and the problem of realization. The module framework is then utilized to study the reachability and controllability of states and outputs: New results are obtained on the smoothness of controls, bounded-time controls, and minimal-time controls.

Journal ArticleDOI
TL;DR: In this paper, the authors present conditions which guarantee that the control strategies adopted by N players constitute an efficient solution, an equilibrium, or a core solution. But they do not consider the case where all players have perfect information.
Abstract: The paper presents conditions which guarantee that the control strategies adopted by N players constitute an efficient solution, an equilibrium, or a core solution. The system dynamics are described by an Ito equation, and all players have perfect information. When the set of instantaneous joint costs and velocity vectors is convex, the conditions are necessary.

Journal ArticleDOI
TL;DR: Three computational methods which extend to nonlinearly constrained minimization problems the efficient convergence properties of, respectively, the method of steepest descent, the variable metric method, and Newton’s method for unconstrained minimization are presented.
Abstract: This paper presents three computational methods which extend to nonlinearly constrained minimization problems the efficient convergence properties of, respectively, the method of steepest descent, the variable metric method, and Newton’s method for unconstrained minimization. Development of the algorithms is based on use of the implicit function theorem to essentially convert the original constrained problem to an unconstrained one. This approach leads to practical and efficient algorithms in the framework of Abadie’s generalized reduced gradient method. To achieve efficiency, it is shown that it is necessary to construct a sequence of approximations to the Lagrange multipliers of the problem simultaneously with the approximations to the solution itself. In particular, the step size of each iteration must be determined by a linesearch for a minimum of an approximate Lagrangian function.
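A sketch of one ingredient the paper emphasizes, estimating Lagrange multipliers alongside the primal iterates; the least-squares estimate below is a standard choice, not necessarily the paper's exact formula, and the example problem is invented:

```python
import numpy as np

# Least-squares multiplier estimate at a point x for constraints c(x) = 0:
#   min_lam || grad_f(x) + J(x)' lam ||,
# where J is the Jacobian of c. At a solution this recovers the exact
# multipliers of the Karush-Kuhn-Tucker conditions.

def multiplier_estimate(grad_f, J):
    lam, *_ = np.linalg.lstsq(J.T, -grad_f, rcond=None)
    return lam

# Example: f(x) = x1^2 + 2 x2^2, c(x) = x1 + x2 - 1, at the solution (2/3, 1/3).
x = np.array([2.0 / 3.0, 1.0 / 3.0])
grad_f = np.array([2.0 * x[0], 4.0 * x[1]])
J = np.array([[1.0, 1.0]])
print("multiplier estimate:", multiplier_estimate(grad_f, J))  # ~ -4/3
```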

Journal ArticleDOI
TL;DR: In this article, the authors prove the maximum principle for parabolic operators in the setting of the exponential functionals which express the derivatives of measures induced by translations in Wiener space.
Abstract: Consider the stochastic control problem of minimizing the final value expectation $El(k'z_1 )$ by choosing a measurable control law $u( \cdot , \cdot )$, subject to the stochastic differential equation $dz_t = A(t)z_t dt + B(t)u(t,z_t )dt + C(t)dw_t $, $0 \leqq t \leqq 1$, for the process z, and to the boundedness condition $u:[0,1] \times R^d \to [ - 1,1]^r $, with w a Wiener process, $k \ne 0$ a given vector, and $l( \cdot )$ an even positive function increasing in $x > 0$. C. G. Hilborn, Jr. and others have conjectured that one optimal law takes the form of full “bang” in the direction of reducing the “predicted miss”, defined as the expected value of $k'z_1 $ with control identically zero. Using the maximum principle for parabolic operators, we prove this conjecture in the setting of the exponential functionals which express the derivatives of measures induced by translations in Wiener space.
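A Monte Carlo sketch of the conjectured law in a scalar special case ($A = 0$, $B = C = 1$, $k = 1$, $l(x) = |x|$, all chosen for illustration), where the predicted miss reduces to the current state $z_t$ itself:

```python
import numpy as np

# Euler-Maruyama simulation of dz = u dt + dw on [0, 1]. With zero control
# and A = 0 the predicted miss E[z_1 | z_t] equals z_t, so the conjectured
# law is full bang against the current state: u = -sign(z_t).

rng = np.random.default_rng(3)
n_paths, n_steps = 20000, 200
dt = 1.0 / n_steps

def simulate(bang):
    z = np.zeros(n_paths)
    for _ in range(n_steps):
        u = -np.sign(z) if bang else 0.0
        z = z + u * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.abs(z).mean()            # Monte Carlo estimate of E|z_1|

print("E|z_1| uncontrolled:  ", simulate(False))
print("E|z_1| with bang-bang:", simulate(True))
```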


Journal ArticleDOI
TL;DR: In this paper, the authors study the attainable set and derive necessary conditions for relaxed, original and strictly original minimum in control problems defined by ordinary differential equations with unilateral restrictions, where the functions defining the problem are assumed to be Lipschitz-continuous in their dependence on the state variables except for the unilateral restriction where continuous differentiability is also required.
Abstract: We study the attainable set and derive necessary conditions for relaxed, original and strictly original minimum in control problems defined by ordinary differential equations with unilateral restrictions. The functions defining the problem are assumed to be Lipschitz-continuous in their dependence on the state variables except for the unilateral restriction where continuous differentiability is also required. We define an extremal control as one satisfying a generalized Pontryagin maximum principle, with set-valued “derivate containers” replacing nonexistent derivatives. We prove that a nonextremal control (either original or relaxed) yields an interior point of the attainable set generated by original controls, and that, in normal problems, a minimizing original solution must also be a minimizing relaxed solution. The proofs are carried out with the help of an inverse function theorem for Lipschitz-continuous functions that is formulated in terms of derivate containers.