
Showing papers in "Siam Journal on Control and Optimization in 1977"


Journal ArticleDOI
TL;DR: In this article, the problem of regulating the output of a linear time-invariant system subjected to disturbance and reference signals is considered and a new and simpler algebraic solution is given.
Abstract: The problem is considered of regulating in the face of parameter uncertainty the output of a linear time-invariant system subjected to disturbance and reference signals. This problem has been solved by other researchers. In this paper a new and simpler algebraic solution is given.

1,089 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce semismooth and semiconvex functions and discuss their properties with respect to nonsmooth nonconvex constrained optimization problems and give a chain rule for generalized gradients.
Abstract: We introduce semismooth and semiconvex functions and discuss their properties with respect to nonsmooth nonconvex constrained optimization problems. These functions are locally Lipschitz, and hence have generalized gradients. The author has given an optimization algorithm that uses generalized gradients of the problem functions and converges to stationary points if the functions are semismooth. If the functions are semiconvex and a constraint qualification is satisfied, then we show that a stationary point is an optimal point. We show that the pointwise maximum or minimum over a compact family of continuously differentiable functions is a semismooth function and that the pointwise maximum over a compact family of semiconvex functions is a semiconvex function. Furthermore, we show that a semismooth composition of semismooth functions is semismooth, and we give a type of chain rule for generalized gradients.

830 citations


Journal ArticleDOI
TL;DR: This paper surveys measurable selection theorems: conditions under which a set-valued map F admits a measurable selection f with $f(t) \in F(t)$, including the theorem of Kuratowski and Ryll-Nardzewski and a result of von Neumann that follows from it by lifting F in a natural way to a map into the closed sets of a Polish space.
Abstract: Suppose $(T,\mathcal{M})$ is a measurable space, X is a topological space, and $\emptyset \ne F(t) \subset X$ for $t \in T$. Denote ${\operatorname {Gr}}F = \{ (t,x):x \in F(t)\} $. The problem surveyed (reviewing work of others) is that of existence of $f:T \to X$ such that $f(t) \in F(t)$ for $t \in T$ and $f^{ - 1} (U) \in \mathcal{M}$ for open $U \subset X$. The principal conditions that yield such f are (i) X is Polish, each $F(t)$ is closed, and $\{ t:F(t) \cap U \ne \emptyset \} \in \mathcal{M}$ whenever $U \subset X$ is open (Kuratowski and Ryll-Nardzewski and, under stronger assumptions, Castaing), or (ii) T is a Hausdorff space, ${\operatorname {Gr}}F$ is a continuous image of a Polish space, and $\mathcal{M}$ is the $\sigma $-algebra of sets measurable with respect to an outer measure, among which are the open sets of T (primarily von Neumann). The latter result follows from the former by lifting F in a natural way to a map into the closed sets of a Polish space. This procedure leads to the theory ...

553 citations


Journal ArticleDOI
TL;DR: The high order maximal principle (HMP), announced in [11], is a generalization of the familiar Pontryagin maximal principle, obtained by using higher derivatives of a large class of control variations.
Abstract: The high order maximal principle (HMP) which was announced in [11] is a generalization of the familiar Pontryagin maximal principle. By using the higher derivatives of a large class of control vari...

392 citations


Journal ArticleDOI
TL;DR: Proper efficient points (Pareto maxima) are defined in tangent cone terms and are characterized by the existence of equivalent real-valued maximization problems.
Abstract: Proper efficient points (Pareto maxima) are defined in tangent cone terms and are characterized by the existence of equivalent real-valued maximization problems.
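The scalarization direction of this characterization can be illustrated on a finite set of outcome vectors, where every maximizer of a strictly positive weighted sum is an efficient point. A minimal sketch with a point set and weights invented for illustration (the paper's tangent-cone setting is far more general):

```python
# Finite illustration: maximizers of strictly positive weighted sums are
# Pareto maxima (the scalarization direction of the characterization).

def dominates(a, b):
    """True if a Pareto-dominates b: >= in every coordinate, > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_maxima(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def weighted_sum_maximizer(points, w):
    return max(points, key=lambda p: sum(wi * xi for wi, xi in zip(w, p)))

points = [(1, 4), (2, 3), (3, 1), (0, 0), (2, 2)]
front = pareto_maxima(points)
print(front)                                   # -> [(1, 4), (2, 3), (3, 1)]
for w in [(0.5, 0.5), (0.9, 0.1), (0.1, 0.9)]:
    assert weighted_sum_maximizer(points, w) in front
```

Each strictly positive weight vector defines an equivalent real-valued maximization problem whose solution lies on the efficient frontier; the paper's contribution is characterizing *proper* efficiency by the existence of such problems.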

297 citations


Journal ArticleDOI
TL;DR: In this article, the duality relationship between observation and control in an abstract Banach space setting is explored and preservation of observability and controllability in the presence of certain perturbations is studied in the context of differential equations in Banach spaces.
Abstract: This report explores the duality relationships between observation and control in an abstract Banach space setting. Preservation of observability and controllability in the presence of certain perturbations is studied in the context of differential equations in Banach space. Some attention is also given to the problem of optimal reconstruction of system states from observations.

271 citations


Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition is proved for the integral functional to be sequentially lower semicontinuous with respect to certain kinds of strong convergence of the $x(\cdot)$-components and weak convergence of the $y(\cdot)$-components.
Abstract: A necessary and sufficient condition for the integral functional $I(x( \cdot ),y( \cdot )) = \smallint _G f(t,x(t),y(t))d\mu $ to be sequentially lower semicontinuous with respect to some kinds of strong convergence of the $x( \cdot )$-components and weak convergence of the $y( \cdot )$-components is proved. It is shown how many known and new sufficient conditions can be easily derived from this result. Such properties of the integrand as measurability, lower semicontinuity in $(x,y)$ and convexity in y are also discussed. It appears that if $I( \cdot , \cdot )$ is lower semicontinuous then any other integrand $g(t,x,y)$ such that $g(t,x(t),y(t)) = f(t,x(t),y(t))$ a.e. for any measurable $x( \cdot )$, $y( \cdot )$ necessarily has these properties even if the integrand f itself fails to satisfy some or any of them.

236 citations


Journal ArticleDOI
TL;DR: In this article, the authors give a simple characterization of the uniform asymptotic stability of equations in terms of Lyapunov functions; a new and general sufficient condition for uniform asymptotic stability is given along the way.
Abstract: In this paper we give a simple characterization of the uniform asymptotic stability of equations $\dot x = - P(t)x$ where $P(t)$ is a bounded piecewise continuous symmetric positive semi-definite matrix. In the course of developing this characterization, a new and general sufficient condition is given for uniform asymptotic stability in terms of Lyapunov functions. The stability of this type of equation has come up in various control theory contexts (identification, optimization and filtering).
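The role of the semi-definiteness hypothesis can be seen from the standard Lyapunov candidate; the following is only the routine first step, not the paper's characterization. With $V(x) = x^\top x$,

$$\dot V(x(t)) = 2x^\top \dot x = -2x^\top P(t)x \le 0,$$

so $V$ is nonincreasing along solutions and the origin is uniformly stable. Since $P(t)$ is only positive semi-definite, $\dot V$ may vanish along nontrivial trajectories, which is why a further condition on $P(t)$ is needed to conclude uniform *asymptotic* stability.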

233 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that exact controllability in finite time for linear control systems given on an infinite dimensional Banach space in integral form (mild solution) can never arise using locally $L_1 $-controls, if the associated $C_0 $ semigroup is compact for all $t > 0.
Abstract: It is shown that exact controllability in finite time for linear control systems given on an infinite dimensional Banach space in integral form (mild solution) can never arise using locally $L_1 $-controls, if the associated $C_0 $ semigroup is compact for all $t > 0$. This includes, in particular, the class of parabolic partial differential equations defined on bounded spatial domains.

191 citations


Journal ArticleDOI
TL;DR: This paper concerns a recent result on weak stabilizability of the system $\dot x = Ax + Bu$, where A is the infinitesimal generator of a contraction semigroup over a Hilbert space H and B is linear bounded.
Abstract: A recent result on weak stabilizability is that the system $\dot x = Ax + Bu$, where A is the infinitesimal generator of a contraction semigroup over a Hilbert space H, and B is linear bounded, is ...

182 citations


Journal ArticleDOI
TL;DR: In this paper, necessary conditions for the switching function holding at junction points of optimal interior and boundary arcs or at contact points with the boundary are given, where the transition from unconstrained to constrained extremals is discussed with respect to the order p of the state constraint.
Abstract: Necessary conditions for the switching function, holding at junction points of optimal interior and boundary arcs or at contact points with the boundary, are given. These conditions are used to derive necessary conditions for the optimality of junctions between interior and boundary arcs. The junction theorems obtained are similar to those developed for singular control problems in [1] and establish a duality between singular control problems and control problems with bounded state variables and control appearing linearly. The transition from unconstrained to constrained extremals is discussed with respect to the order p of the state constraint. A numerical example is given where the adjoint variables are not unique but form a convex set which is determined numerically.

Journal ArticleDOI
TL;DR: In this paper, the identifiability of spatially varying and constant parameters of the system described by a linear, 1-dimensional, parabolic partial differential equation is studied for both distributed and pointwise measurements.
Abstract: For the parameter identification process to minimize the difference between the system output and the model output, this paper discusses the identifiability of spatially varying and constant parameters of the system described by a linear, 1-dimensional, parabolic partial differential equation. Only the parameters in the system equation (not in the boundary condition) are assumed to be unknown and the identifiability in the deterministic sense is treated. For both cases of distributed and pointwise measurements, several results for the parameter identifiability and nonidentifiability are obtained. As a result, the identifiability conditions depend on the profile of the state of the model for the case of the distributed measurement, while, for the case of the pointwise measurement, such conditions depend on the position of a detector and the form of initial or input functions. The results are represented in terms of a priori known quantities and are easily applied to practical problems.

Journal ArticleDOI
TL;DR: An abstract model is proposed for the problem of optimal control of systems subject to random perturbations, for which the principle of optimality takes on an appealing form and the additional structure permits operationally useful optimality conditions.
Abstract: The paper proposes an abstract model for the problem of optimal control of systems subject to random perturbations, for which the principle of optimality takes on an appealing form. This model is specialized to the case where the state of the controlled system is realized as a jump process. The additional structure permits operationally useful optimality conditions. Some illustrative examples are solved.

Journal ArticleDOI
TL;DR: By an effective extension of the conjugate function concept a general framework for duality-stability relations in nonconvex optimization problems can be studied and the results obtained show strong correspondences with the duality theory for convex minimization problems.
Abstract: By an effective extension of the conjugate function concept a general framework for duality-stability relations in nonconvex optimization problems can be studied. The results obtained show strong correspondences with the duality theory for convex minimization problems. In specializations to mathematical programming problems the canonical Lagrangian of the model appears as the extended Lagrangian considered in exterior penalty function methods.

Journal ArticleDOI
TL;DR: In this article, a study of stability and differential stability in nonconvex programming with equality and inequality constraints is presented, where upper and lower bounds for the potential directional derivatives of the perturbation function (or the extremal value function) are obtained' with the help of a constraint qualification which is shown to be necessary and sufficient to have bounded multipliers.
Abstract: This paper consists of a study of stability and differential stability in nonconvex programming. For a program with equality and inequality constraints, upper and lower bounds are estimated for the potential directional derivatives of the perturbation function (or the extremal-value function). These results are obtained with the help of a constraint qualification which is shown to be necessary and sufficient to have bounded multipliers. New results on the continuity of the perturbation function are also obtained.

Journal ArticleDOI
TL;DR: In this paper, the authors define the solution to a stochastic differential equation to be the solution of the martingale problem and obtain results on the existence of an optimal stationary control for the average cost per unit time problem, a necessary and sufficient condition for optimality of a control, and other related results.
Abstract: Defining the solution to a stochastic differential equation to be the solution to the martingale problem of Stroock and Varadhan, we obtain results on the existence of an optimal stationary control for the average cost per unit time problem, a necessary and sufficient condition for optimality of a control, and a number of other related results.

Journal ArticleDOI
TL;DR: In this article, the authors examined exhaustively the role of first order necessary conditions in answering the question of whether time-dependent periodic control yields better process performance than optimal steady-state control.
Abstract: Does time-dependent periodic control yield better process performance than optimal steady-state control? This paper examines exhaustively the role of first order necessary conditions in answering this question. For processes described by autonomous, ordinary differential equations, a very general optimal periodic control problem (OPC) is formulated. By considering control and state functions which are constant, a finite-dimensional optimal steady-state problem (OSS) is obtained from OPC. Three solution sets are introduced: $\mathcal{S}$(OSS)—the solutions of OSS, $\mathcal{S}$(OPC)—the solutions of OPC, $\mathcal{S}$(SSOPC)—the solutions of OPC which are constant. Necessary conditions for elements of each of these sets are derived; their solution sets are denoted, respectively, by $\mathcal{S}$(NCOSS), $\mathcal{S}$(NCOPC), and $\mathcal{S}$(NCSSOPC). The relationship between these six solution sets is a central issue. Under various hypotheses certain pair-wise inclusions of the six sets are determined a...

Journal ArticleDOI
TL;DR: In this paper, the optimal control of nonlinear dynamical systems on a finite time interval is considered and the existence of a solution is proved and a power series solution of both the problems is constructed.
Abstract: In this paper the optimal control of nonlinear dynamical systems on a finite time interval is considered. The free end-point problem as well as the fixed end-point problem is studied. The existence of a solution is proved and a power series solution of both the problems is constructed.

Journal ArticleDOI
TL;DR: This article presents a short review of known penalty techniques, some properties of the projection on a cone, basic properties of penalty functionals for a general optimization problem, and duality theory for nonconvex problems in infinit...
Abstract: Each element p of a real Hilbert space H can be uniquely decomposed into two orthogonal components, $p = p^D + p^{ - D^ * } $ where $p^D \in D$ is the projection of p on a closed convex cone D and $p^{ - D^ * } $ is the projection of p on the minus dual cone $ - D^ * $. Hence, if the cone D generates a partial order in H, then the positive part $p^D $ and the negative part $p^{ - D^ * } $ of each $p \in H$ can be distinguished. For a general optimization problem: minimize $Q(y)$ over $Y_p = \{ y \in E:p - P(y) \in D \subset H\} $, where $Q:E \to R$, $P:E \to H$, E is Banach, H is Hilbert: the violation of the constraint can be determined by $(p - P(y))^{ - D^ * } $. Hence a generalized penalty functional and an augmented Lagrange functional can be defined for this problem. The paper presents a short review of known penalty techniques, some properties of the projection on a cone, basic properties of penalty functionals for a general optimization problem and duality theory for nonconvex problems in infinit...
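For the simplest cone, the nonnegative orthant $D = R^n_+$ (which is self-dual, so $-D^*$ is the nonpositive orthant), the decomposition is coordinatewise. A minimal numeric sketch with an invented vector:

```python
# Moreau decomposition of p for the cone D = R^n_+ (self-dual, so -D* = R^n_-):
# the two projections split p coordinatewise into positive and negative parts.

def project_cone(p):
    return [max(x, 0.0) for x in p]        # p^D, projection onto D

def project_minus_dual(p):
    return [min(x, 0.0) for x in p]        # p^{-D*}, projection onto -D*

p = [3.0, -1.5, 0.0, -2.0, 4.0]
pD = project_cone(p)
pN = project_minus_dual(p)

assert all(a == b + c for a, b, c in zip(p, pD, pN))   # p = p^D + p^{-D*}
assert sum(b * c for b, c in zip(pD, pN)) == 0.0       # orthogonality
print(pD, pN)
```

The negative part $p^{-D^*}$ is exactly the quantity the generalized penalty functional charges as constraint violation.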

Journal ArticleDOI
TL;DR: In this article, the authors consider a class of nonlinear programs in which the imposition of integrality constraints on the variables makes it possible to solve the problem by a single, easily-constructed linear program.
Abstract: Although the addition of integrality constraints to the existing constraints of an optimization problem will, in general, make the determination of an optimal solution more difficult, we consider here a class of nonlinear programs in which the imposition of integrality constraints on the variables makes it possible to solve the problem by a single, easily-constructed linear program. The class of problems addressed has a separable convex objective function and a totally unimodular constraint matrix. Such problems arise in logistic and personnel assignment applications.
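A toy instance of the idea, under simplifying assumptions not taken from the paper: with a separable convex objective and the single totally unimodular constraint $\sum_i x_i = N$, the linearized problem is solved by taking the $N$ cheapest unit increments greedily (the cost functions below are invented):

```python
import heapq

# Minimize sum_i f_i(x_i) subject to sum_i x_i = N, x_i integer >= 0,
# with each f_i convex: the unit increments f_i(k+1) - f_i(k) are then
# nondecreasing, so the N globally cheapest increments give an optimum.

def greedy_allocate(costs, N):
    n = len(costs)
    x = [0] * n
    heap = [(costs[i](1) - costs[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(N):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (costs[i](x[i] + 1) - costs[i](x[i]), i))
    return x

costs = [lambda k: k * k,            # f_1(k) = k^2
         lambda k: 2 * k,            # f_2(k) = 2k
         lambda k: 0.5 * k * k + k]  # f_3(k) = k^2/2 + k
print(greedy_allocate(costs, 4))     # -> [1, 2, 1]
```

Convexity guarantees that each $f_i$'s increments are nondecreasing, which is what makes the linearization exact; the paper's class covers general totally unimodular constraint matrices via a single linear program.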

Journal ArticleDOI
TL;DR: A general duality theory is given for smooth nonconvex optimization problems, covering both the finite-dimensional case and the calculus of variations.
Abstract: A general duality theory is given for smooth nonconvex optimization problems, covering both the finite-dimensional case and the calculus of variations. The results are quite similar to the convex case; in particular, with every problem $(\mathcal{P})$ is associated a dual problem $(\mathcal{P}^ * )$ having opposite value. This is done at the expense of broadening the framework from smooth functions $\mathbb{R}^n \to \mathbb{R}$ to Lagrangian submanifolds of $\mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$.

Journal ArticleDOI
TL;DR: In this paper, the optimal control of a stochastic system with both complete and partial observations is considered, and it is shown that, almost surely, the optimum control should minimize the conditional expectation of a certain Hamiltonian, with respect to an optimum measure and the observed $\sigma $-field.
Abstract: The optimal control of a stochastic system with both complete and partial observations is considered. In the completely observable case, because the cost function is, in the terminology of Meyer, a “semimartingale speciale,” a dynamic programming condition for the optimal control is obtained in terms of a certain Hamiltonian. The partially observable case is then discussed from first principles, and it is shown that, almost surely, the optimum control should minimize the conditional expectation of a certain Hamiltonian, with respect to an optimum measure and the observed $\sigma $-field.

Journal ArticleDOI
TL;DR: In this paper, the authors considered two-person deterministic nonzero-sum differential games and provided a characterization of all Nash equilibrium solutions for a particular dynamic information pattern, and proposed an optimal unique selection of an element of the Nash equilibrium set, which exhibits a robust behavior by being insensitive to additive random perturbations in the state dynamics.
Abstract: This paper is concerned with two-person deterministic nonzero-sum differential games (NZSDG) characterized by quadratic objective functionals and with state dynamics described by linear differential equations. It is first shown that such games admit uncountably many (informationally nonunique) noncooperative (Nash) equilibrium solutions when at least one of the players has access to dynamic information. We provide a characterization of all Nash equilibrium solutions to the problem for a particular dynamic information pattern, and propose an optimal unique selection of an element of the Nash equilibrium set, which exhibits a robust behavior by being insensitive to additive random perturbations in the state dynamics. We model these random perturbations as a local martingale process and obtain the abovementioned optimal Nash strategy pair as the unique noncooperative equilibrium solution of a related stochastic NZSDG. With regard to the latter, it is shown that the unique Nash equilibrium strategy of the pla...

Journal ArticleDOI
TL;DR: In this article, the optimal control of a system where the state is modeled by a homogeneous diffusion process in $R^1 $ was studied. And sufficient conditions were found to determine the optimal policy in both an infinite horizon case with discounting and a finite horizon case.
Abstract: This paper concerns the optimal control of a system where the state is modeled by a homogeneous diffusion process in $R^1 $. Each time the system is controlled a fixed cost is incurred as well as a cost which is proportional to the magnitude of the control applied. In addition to the cost of control, there are holding or carrying costs incurred which are a function of the state of the system. Sufficient conditions are found to determine the optimal control in both an infinite horizon case with discounting and a finite horizon case. In both cases the optimal policy is one of “impulse” control originally introduced by Bensoussan and Lions [2] where the system is controlled only a finite number of times in any bounded time interval and the control requires an instantaneous finite change in the state variable. The issue of the existence of such controls is not addressed.

Journal ArticleDOI
TL;DR: In this paper, the authors take as a starting point this mapping and obtain results that are applicable to a broad class of problems and apply them to the positive and negative dynamic programming models of Blackwell and Strauch.
Abstract: The structure of many sequential optimization problems over a finite or infinite horizon can be summarized in the mapping that defines the related dynamic programming algorithm. In this paper we take as a starting point this mapping and obtain results that are applicable to a broad class of problems. This approach has also been taken earlier by Denardo under contraction assumptions. The analysis here is carried out without contraction assumptions and thus the results obtained can be applied, for example, to the positive and negative dynamic programming models of Blackwell and Strauch. Most of the existing results for these models are generalized and several new results are obtained relating mostly to the convergence of the dynamic programming algorithm and the existence of optimal stationary policies.
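The mapping viewpoint is concrete for a finite discounted model, where the DP algorithm is just iteration of $(Tv)(s) = \max_a [r(s,a) + \beta \sum_{s'} p(s'|s,a)v(s')]$. A minimal sketch with an invented two-state model; note it uses a discount $\beta < 1$, i.e. exactly the contraction case the paper goes beyond:

```python
# Value iteration = repeated application of the dynamic programming mapping T.
# Two states, two actions; R[s][a] is the reward, P[s][a] the transition law.

BETA = 0.9   # discount factor: the contraction case
R = {0: {'a': 1.0, 'b': 0.0}, 1: {'a': 0.0, 'b': 2.0}}
P = {0: {'a': [0.8, 0.2], 'b': [0.0, 1.0]},
     1: {'a': [1.0, 0.0], 'b': [0.3, 0.7]}}

def T(v):
    """One application of the DP mapping (Tv)(s) = max_a R[s][a] + BETA*E[v]."""
    return [max(R[s][a] + BETA * sum(p * vj for p, vj in zip(P[s][a], v))
                for a in R[s]) for s in (0, 1)]

v = [0.0, 0.0]
for _ in range(500):
    v = T(v)
print([round(x, 3) for x in v])
assert max(abs(a - b) for a, b in zip(v, T(v))) < 1e-9   # v is a fixed point of T
```

With $\beta < 1$ the iterates converge geometrically to the unique fixed point; in the undiscounted positive and negative models of Blackwell and Strauch, uniqueness and convergence are exactly the questions the paper addresses.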

Journal ArticleDOI
TL;DR: In this article, a theory analogous to the Krohn-Rhodes theory of finite automata is developed for systems described by a finite dimensional ODE, and it is shown that every such system with a finite-dimensional Lie algebra can be decomposed into the cascade of systems with simple or one-dimensional algebras.
Abstract: A theory analogous to the Krohn–Rhodes theory of finite automata is developed for systems described by a finite dimensional ordinary differential equation. It is shown that every such system with a finite dimensional Lie algebra can be decomposed into the cascade of systems with simple or one dimensional algebras. Moreover, in some sense these systems admit no further decomposition. No knowledge of Krohn–Rhodes theory is assumed of the reader.

Journal ArticleDOI
TL;DR: In this paper, nonnegatively constrained quadratic programming subproblems are iteratively solved to obtain estimates of Lagrange multipliers, and with these estimates a sequence of points which converges to the solution is generated.
Abstract: We present a class of algorithms for solving constrained optimization problems. In the algorithm, nonnegatively constrained quadratic programming subproblems are iteratively solved to obtain estimates of Lagrange multipliers and with these estimates a sequence of points which converges to the solution is generated. To achieve a superlinear rate of convergence the matrix appearing in the subproblem is required to be an approximate inverse of the Hessian of the Lagrangian. Some well-known variable metric updates such as the BFGS update are employed to generate the matrix and the resulting algorithm converges locally with a superlinear rate. When the penalty Lagrangian developed by Hestenes, Powell and Rockafellar is incorporated in the algorithm it turns out to be closely related to the recently developed method of multipliers. Unlike the method of multipliers, our algorithm takes only one step in the unconstrained minimization of the penalty Lagrangian. Besides, it possesses a superlinear rate of convergence...
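The variable metric ingredient can be sketched in isolation: the inverse-form BFGS update named in the abstract produces a symmetric matrix satisfying the secant condition $H_+ y = s$. A standalone numeric sketch (the step s and gradient change y are invented; this is the textbook update, not the paper's full algorithm):

```python
# Inverse-form BFGS update: H+ = (I - r s y^T) H (I - r y s^T) + r s s^T,
# with r = 1/(y^T s); afterwards the secant condition H+ y = s holds.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bfgs_update(H, s, y):
    n = len(s)
    r = 1.0 / dot(y, s)
    Hy = matvec(H, y)
    yHy = dot(y, Hy)
    # expanded form of (I - r s y^T) H (I - r y s^T) + r s s^T for symmetric H
    return [[H[i][j]
             - r * (s[i] * Hy[j] + Hy[i] * s[j])
             + (r * r * yHy + r) * s[i] * s[j]
             for j in range(n)] for i in range(n)]

H = [[1.0, 0.0], [0.0, 1.0]]   # current inverse-Hessian approximation
s = [1.0, 0.5]                 # step taken
y = [2.0, 1.5]                 # change in gradient
Hn = bfgs_update(H, s, y)
print(matvec(Hn, y))           # approximately equal to s
```

Updates of this kind supply the approximate inverse Hessian of the Lagrangian that the superlinear-rate result requires.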

Journal ArticleDOI
TL;DR: In this article, the Riccati equations for systems defined by evolution operators are generalized to distributed systems governed by partial differential equations, where the control operators act on the boundary or on submanifolds of the system region.
Abstract: In [The infinite dimensional Riccati equations for systems defined by evolution operators, Ruth F. Curtain and A. J. Pritchard, this Journal, 14 (1976), pp. 951–983], we have examined the linear quadratic control problem for systems described by abstract input-output relationships on Hilbert spaces, but the application of our results to distributed systems governed by partial differential equations requires that the control operators are bounded. This is a severe restriction, since for most systems of practical interest the controls will act on the boundary or on submanifolds of the system region and so unbounded operators are involved. In this paper we generalize the above work to include such control action.

Journal ArticleDOI
TL;DR: In this paper, the convergence of a steepest descent iterative procedure for determining an extremal point of a function defined on a sequentially compact convex subset of a topological vector space was proved.
Abstract: We prove the convergence of a steepest descent iterative procedure for determining an “extremal” point of a function defined on a sequentially compact convex subset of a topological vector space. We then apply this procedure to the problem of determining an extremal of a relaxed optimal control problem defined by ordinary differential equations without endpoint or unilateral restrictions.

Journal ArticleDOI
TL;DR: In this article, the separation principle is proved for a general class of linear infinite dimensional systems, which includes linear ordinary equations, classes of linear partial differential equations, and linear delay equations.
Abstract: The separation principle is proved for a general class of linear infinite dimensional systems. The dynamical system is modeled as an abstract evolution equation, which includes linear ordinary equations, classes of linear partial differential equations and linear delay equations. The noise process in the control system is modeled using a stochastic integral with respect to a class of Hilbert space valued Gaussian stochastic processes, which includes the Wiener process as a special case. The observation process is finite dimensional and is corrupted by Gaussian type white noise, which is modeled using the Wiener integral. The cost functional to be minimized is quadratic.