
Showing papers in "Siam Journal on Control and Optimization in 1994"


Journal ArticleDOI
TL;DR: In this paper, a maximum principle is proved for optimal control of stochastic systems with random jumps, where the control is allowed to enter into both diffusion and jump terms, and the form of the maximum principle turns out to be quite different from the one corresponding to the pure diffusion system (the word "pure" here means the absence of the jump term).
Abstract: A maximum principle is proved for optimal controls of stochastic systems with random jumps. The control is allowed to enter into both diffusion and jump terms. The form of the maximum principle turns out to be quite different from the one corresponding to the pure diffusion system (the word "pure" here means the absence of the jump term). In calculating the first-order coefficient for the cost variation, only a property for Lebesgue integrals of scalar-valued functions in the real number space ${\cal R}$ is used. This shows that there is no essential difference between deterministic and stochastic systems as far as the derivation of maximum principles is concerned.

534 citations


Journal ArticleDOI
TL;DR: In this paper, observability and observers for general nonlinear systems were studied; in the non-control-affine case, systems that are observable independently of the inputs were characterized, and an exponential observer for these systems was exhibited.
Abstract: This paper deals with observability and observers for general nonlinear systems. In the non-control-affine case, we characterize systems that are observable independently of the inputs. An exponential observer for these systems is also exhibited.

465 citations


Journal ArticleDOI
TL;DR: In this article, the problem of finding the optimal sequence of starting and stopping times of a multi-activity production process, given the costs of opening, running, and closing the activities and assuming that the state of the economic system is a stochastic process, is formulated as an extended impulse control problem and solved using stochastic calculus.
Abstract: This paper considers the problem of finding the optimal sequence of opening (starting) and closing (stopping) times of a multi-activity production process, given the costs of opening, running, and closing the activities and assuming that the state of the economic system is a stochastic process. The problem is formulated as an extended impulse control problem and solved using stochastic calculus. As an application, the optimal starting and stopping strategy is explicitly found for a resource extraction problem when the price of the resource follows a geometric Brownian motion.

227 citations


Journal ArticleDOI
TL;DR: In this paper, a singular, abnormal minimizer for the Lagrange problem with linear velocity constraints and quadratic definite Lagrangian was constructed, which is a counterexample to a theorem that has appeared several times in the differential geometry literature.
Abstract: This paper constructs the first example of a singular, abnormal minimizer for the Lagrange problem with linear velocity constraints and quadratic definite Lagrangian, or, equivalently, for an optimal control system of linear controls, with $k$ controls, $n$ states, and a running cost function that is quadratic positive-definite in the controls. In the example, $k=2, n=3$, and the system is completely controllable. The example is stable: if both the control law and cost are perturbed, the singular minimizer persists. Its importance is due, in part, to the fact that it is a counterexample to a theorem that has appeared several times in the differential geometry literature. There, the problem is called the problem of finding minimizing sub-Riemannian geodesics, and it has been claimed that all minimizers are normal Pontryagin extremals [The Mathematical Theory of Optimal Processes, Wiley-Interscience, New York, 1962]. (If the number of states equals the number of controls, then the problem is that of finding Riemannian geodesics.) The main difficulty is proving minimality. To do this, the length (cost) of the abnormal is compared with all competing normal extremals. A detailed asymptotic analysis of the differential equations governing the normals shows that they are all longer.

194 citations


Journal ArticleDOI
TL;DR: The necessity of Brockett's condition for stabilizability of nonlinear systems by smooth feedback is shown in this paper, by an argument based on properties of a degree for set-valued maps, to persist when the class of controls is enlarged to include discontinuous feedback.
Abstract: The necessity of Brockett's condition for stabilizability of nonlinear systems by smooth feedback is shown, by an argument based on properties of a degree for set-valued maps, to persist when the class of controls is enlarged to include discontinuous feedback.

157 citations


Journal ArticleDOI
TL;DR: In this article, a general investment and consumption problem for a single agent who consumes and invests in a riskless asset and a risky one is examined, and the objective is to maximize the total expected discounted utility of consumption.
Abstract: The paper examines a general investment and consumption problem for a single agent who consumes and invests in a riskless asset and a risky one. The objective is to maximize the total expected discounted utility of consumption. Trading constraints, limited borrowing, and no bankruptcy are binding, and the optimization problem is formulated as a stochastic control problem with state and control constraints. It is shown that the value function is the unique smooth solution of the associated Hamilton-Jacobi-Bellman equation, and the optimal consumption and portfolios are provided in feedback form.

140 citations


Journal ArticleDOI
TL;DR: In this paper, a continuous-time linear system with finite jumps at discrete instants of time is considered and an iterative method to compute the ${\cal L}_2$-induced norm of the system with jumps is presented.
Abstract: This paper considers a continuous-time linear system with finite jumps at discrete instants of time. An iterative method to compute the ${\cal L}_2$-induced norm of a linear system with jumps is presented. Each iteration requires solving an algebraic Riccati equation. It is also shown that a linear feedback interconnection of a continuous-time finite-dimensional linear time-invariant (FDLTI) plant and a discrete-time finite-dimensional linear shift-invariant (FDLSI) controller can be represented as a linear system with jumps. This leads to an iterative method to compute the ${\cal L}_2$-induced norm of a sampled-data system.
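As context for the iteration described above, the Riccati-based norm computation for a plain (jump-free) LTI system can be sketched via the equivalent Hamiltonian eigenvalue test; the following is an illustrative sketch under that simplification, not the paper's algorithm for systems with jumps:

```python
import numpy as np

def has_imag_eig(A, B, C, gamma):
    # Hamiltonian test (D = 0 case): the L2-induced norm of
    # G(s) = C (sI - A)^{-1} B is >= gamma iff H(gamma) has an
    # eigenvalue on the imaginary axis.
    H = np.block([[A, (B @ B.T) / gamma],
                  [-(C.T @ C) / gamma, -A.T]])
    return bool(np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8))

def l2_induced_norm(A, B, C, tol=1e-4):
    # Bisection on gamma; A must be Hurwitz (stable).
    lo, hi = 0.0, 1.0
    while has_imag_eig(A, B, C, hi):     # grow the upper bound first
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_imag_eig(A, B, C, mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# first-order lag G(s) = 1/(s + 1), whose L2-induced norm is exactly 1
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(l2_induced_norm(A, B, C))
```

Each bisection step plays the role of the algebraic Riccati solve in the paper's iteration: solvability at a given $\gamma$ is decided by the imaginary-axis eigenvalues of the Hamiltonian matrix.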

139 citations


Journal ArticleDOI
TL;DR: The exact controllability of the Schrödinger equation in bounded domains with Dirichlet boundary condition is studied in this paper, where both the boundary and internal control problems are considered.
Abstract: The exact controllability of the Schrödinger equation in bounded domains with Dirichlet boundary condition is studied. Both the boundary controllability and the internal controllability problems are considered. Concerning the boundary controllability, the paper proves the exact controllability in $H^{-1}(\Omega)$ with $L^2$-boundary control. On the other hand, the exact controllability in $L^{2}(\Omega)$ is proved with $L^2$-controls supported in a neighborhood of the boundary. Both results hold for an arbitrarily small time. The method of proof combines both the HUM (Hilbert Uniqueness Method) and multiplier techniques.

122 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied optimal control problems of the fluid flow governed by the Navier-Stokes equations and formulated two control problems in the case of the driven cavity and flow through a channel with sudden expansion.
Abstract: This paper studies optimal control problems of the fluid flow governed by the Navier--Stokes equations. Two control problems are formulated in the case of the driven cavity and flow through a channel with sudden expansion and solved successfully using a numerical optimization algorithm based on the augmented Lagrangian method. Existence and the first-order optimality condition of the optimal control are established. A convergence result on the augmented Lagrangian method for nonsmooth cost functional is obtained.

120 citations


Journal ArticleDOI
TL;DR: The results represent a direct, efficient and natural solution to Church's problem, the construction of winning strategies for two-player zero-sum $\omega$-regular games of perfect information, and the emptiness problem for automata on infinite trees.
Abstract: A problem in the control of automata on infinite strings is defined and analyzed. The key to the investigation is the development of a fixpoint characterization of the "controllability subset" of a deterministic Rabin automaton, the set of states from which the automaton can be controlled to the satisfaction of its own acceptance condition. The fixpoint representation allows straightforward computation of the controllability subset and the construction of a suitable state-feedback control for the automaton. The results have applications to control synthesis, automaton synthesis, and decision procedures for logical satisfiability; in particular, they represent a direct, efficient and natural solution to Church's problem, the construction of winning strategies for two-player zero-sum $\omega$-regular games of perfect information, and the emptiness problem for automata on infinite trees.

115 citations


Journal ArticleDOI
TL;DR: In this article, the authors give a necessary condition for the exact observability of a strongly continuous semigroup, which is related to the Hautus Lemma from finite dimensional systems theory.
Abstract: Suppose $A$ generates an exponentially stable strongly continuous semigroup on the Hilbert space $X,Y$ is another Hilbert space, and $C : D(A) \rightarrow Y$ is an admissible observation operator for this semigroup. (This means that to any initial state in $X$ we can associate an output function in $L^{2}([0,\infty),Y)$.) This paper gives a necessary condition for the exact observability of the system defined by $A$ and $C$. This condition, called (${\bf E}$), is related to the Hautus Lemma from finite dimensional systems theory. It is an estimate in terms of the operators $A$ and $C$ alone (in particular, it makes no reference to the semigroup). This paper shows that (${\bf E}$) implies approximate observability and, if $A$ is bounded, it implies exact observability. This paper conjectures that (${\bf E}$) is in fact equivalent to exact observability. The paper also shows that for diagonal semigroups, (${\bf E}$) takes on a very simple form, and applies the results to sequences of complex exponential functions.
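For a finite-dimensional pair $(A, C)$, the classical Hautus lemma that condition (${\bf E}$) generalizes is a simple rank test; a minimal sketch:

```python
import numpy as np

def hautus_observable(A, C):
    # Hautus test: (A, C) is observable iff rank([sI - A; C]) = n
    # for every eigenvalue s of A.
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.vstack([s * np.eye(n) - A, C])) == n
        for s in np.linalg.eigvals(A)
    )

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                           # double integrator
print(hautus_observable(A, np.array([[1.0, 0.0]])))  # measuring position: True
print(hautus_observable(A, np.array([[0.0, 1.0]])))  # measuring velocity: False
```

The infinite-dimensional estimate (${\bf E}$) studied in the paper replaces this finite rank condition by a bound involving $A$ and $C$ alone.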

Journal ArticleDOI
TL;DR: In this article, a Mayer optimal control problem with a convex-valued differential inclusion is considered, and necessary conditions are proved incorporating the Hamiltonian inclusion, the Euler-Lagrange inclusion, and the Weierstrass-Pontryagin maximum condition.
Abstract: A Mayer problem of optimal control, whose dynamic constraint is given by a convex-valued differential inclusion, is considered. Both state and endpoint constraints are involved. Necessary conditions are proved incorporating the Hamiltonian inclusion, the Euler-Lagrange inclusion, and the Weierstrass-Pontryagin maximum condition. These results weaken the hypotheses and strengthen the conclusions of earlier works. Their main focus is to allow the admissible velocity sets to be unbounded, provided they satisfy a certain continuity hypothesis. They also sharpen the assertion of the Euler-Lagrange inclusion by replacing Clarke's subgradient of the essential Lagrangian with a subset formed by partial convexification of limiting subgradients. In cases where the velocity sets are compact, the traditional Lipschitz condition implies the continuity hypothesis mentioned above, the assumption of "integrable boundedness" is shown to be superfluous, and this refinement of the Euler-Lagrange inclusion remains a strict improvement on previous forms of this condition.

Journal ArticleDOI
TL;DR: In this paper, a solution of the feedback stabilization problem over commutative rings for matrix transfer functions is provided, which is realized as local stabilizability over the entire spectrum of the ring.
Abstract: This paper provides a solution of the feedback stabilization problem over commutative rings for matrix transfer functions. Stabilizability of a transfer matrix is realized as local stabilizability over the entire spectrum of the ring. For stabilizable plants, certain modules generated by its fractions and those of the stabilizing controller are shown to be projective complements of each other. In the case of rings with irreducible spectrum, this geometric relationship shows that the plant is stabilizable if and only if the above modules of the plant are projective of ranks equal to the number of inputs and the outputs. If the maxspectrum of the ring is Noetherian and of zero (Krull) dimension, then this result shows that the stabilizable plants have doubly coprime fractions. Over unique factorization domains the above stabilizability condition is interpreted in terms of matrices formed by the fractions of the plant. Certain invariants of these matrices, known as elementary factors, are defined, and it is shown that the plant is stabilizable if and only if these elementary factors generate the whole ring. This condition thus provides a generalization of coprime factorizability as a condition for stabilizability. A formula for the class of all stabilizing controllers is then developed that generalizes the previous well-known formula in factorization theory. For multidimensional transfer functions these results provide concrete conditions for stabilizability. Finally, it is shown that the class of polynomial rings over principal ideal domains is an additional class of rings over which stabilizable plants always have doubly coprime fractions.

Journal ArticleDOI
TL;DR: It is shown that for a quite general class of random matrices of interest, the stability of such a vector equation can be guaranteed by that of a corresponding scalar linear equation, for which various results are given without requiring stationary or mixing conditions.
Abstract: First, the paper gives a stability study for the random linear equation $x_{n+1}=(I-A_{n})x_n$. It is shown that for a quite general class of random matrices $\{A_n\}$ of interest, the stability of such a vector equation can be guaranteed by that of a corresponding scalar linear equation, for which various results are given without requiring stationary or mixing conditions. Then, these results are applied to the main topic of the paper, i.e., to the estimation of time-varying parameters in linear stochastic systems, giving a unified stability condition for various tracking algorithms including the standard Kalman filter, least mean squares, and least squares with forgetting factor.
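As a hypothetical illustration (not from the paper), the tracking error of the least mean squares algorithm mentioned above evolves exactly as $x_{n+1}=(I-A_n)x_n$ plus drift and noise terms, with $A_n=\mu\varphi_n\varphi_n^{\top}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu = 2, 0.1
theta = np.array([1.0, -1.0])        # true (drifting) parameter
theta_hat = np.zeros(d)
errs = []
for n in range(5000):
    theta = theta + 1e-4 * rng.standard_normal(d)   # slow random-walk drift
    phi = rng.standard_normal(d)                    # regressor
    y = phi @ theta + 0.01 * rng.standard_normal()  # noisy observation
    # LMS update; the error e_n = theta - theta_hat then satisfies
    # e_{n+1} = (I - mu * phi_n phi_n^T) e_n + drift/noise terms
    theta_hat = theta_hat + mu * phi * (y - phi @ theta_hat)
    errs.append(np.linalg.norm(theta - theta_hat))
print(errs[0], np.mean(errs[-1000:]))
```

The simulated error settles to a small steady level, which is the behavior the paper's stability condition guarantees for this class of algorithms.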

Journal ArticleDOI
TL;DR: In this paper, a complete study of second-order conditions for the optimal control problem with mixed state-control constraints is conducted and a necessary condition in terms of the corresponding Riccati equation is obtained.
Abstract: The goal of this paper is to conduct a complete study of second-order conditions for the optimal control problem with mixed state-control constraints. The conjugate point theory is presented and a necessary condition in terms of the corresponding Riccati equation is obtained. Sufficiency criteria are developed in terms of strengthened necessary conditions, including the Riccati equation. The results generalize the known ones for pure control constraints as well as for the mixed state-control constraints.

Journal ArticleDOI
TL;DR: In this article, it is proved that, for generic feedback laws $u$ such that $f(x,u(x))$ does not vanish, the linearized control systems around the trajectories of the closed-loop system have the same strong accessibility algebra as the original system $f$.
Abstract: For a control system $\dot{x} = f(x,u)$, the author proves that, for generic feedback laws $u$ such that $f(x,u(x))$ does not vanish, the linearized control systems around the trajectories of $\dot{x} = f(x,u(x))$ have the same strong accessibility algebra as $f$. Applications are given to the smooth stabilization problem.

Journal ArticleDOI
TL;DR: In this paper, the authors extend supervisory control theory to the setting of $\omega$-languages (infinite-string specifications) and show that every specification language contains a unique maximal $\omega$-controllable sublanguage, representing the least upper bound of the set of achievable closed-loop sublanguages.
Abstract: Some basic results of supervisory control theory are extended to the setting of $\omega$-languages, formal languages consisting of infinite strings. The extension permits the investigation of both liveness and safety issues in the control of discrete-event systems. A new controllability property appropriate to the infinitary setting ($\omega$-controllability) is defined; this language property captures in a natural way the limitations of available control actions. It is shown that every specification language contains a unique maximal $\omega$-controllable sublanguage, representing the least upper bound of the set of achievable closed-loop sublanguages. This supremal $\omega$-controllable sublanguage allows a simple formulation of necessary and sufficient conditions for the solvability of an infinitary supervisory control problem. The problems of effectively deciding solvability of the control problem and of effectively synthesizing appropriate supervisors are solved for the case where the plant is represented by a deterministic Buchi automaton and the specification of legal behavior by a deterministic Rabin automaton.

Journal ArticleDOI
TL;DR: In this paper, a stability radius for a wide class of linear infinite-dimensional time-varying systems under structured time-varying perturbations is introduced, and a framework is presented that allows the same degree of unboundedness in the perturbations as in the generator of the nominal model.
Abstract: This paper introduces a stability radius for a wide class of linear infinite-dimensional time-varying systems under structured time-varying perturbations. A framework is presented that allows the same degree of unboundedness in the perturbations as in the generator of the nominal model.

Journal ArticleDOI
TL;DR: In this article, an adaptive control problem for the boundary or the point control of a linear stochastic distributed parameter system is formulated and solved, where the unknown parameters in the model appear affinely in both the infinitesimal generator of the semigroup and the linear transformation of the control.
Abstract: An adaptive control problem for the boundary or the point control of a linear stochastic distributed parameter system is formulated and solved in this paper. The distributed parameter system is modeled by an evolution equation with an infinitesimal generator for an analytic semigroup. Since there is boundary or point control, the linear transformation for the control in the state equation is also an unbounded operator. The unknown parameters in the model appear affinely in both the infinitesimal generator of the semigroup and the linear transformation of the control. Strong consistency is verified for a family of least squares estimates of the unknown parameters. An Ito formula is established for smooth functions of the solution of this linear stochastic distributed parameter system with boundary or point control. The certainty equivalence adaptive control is shown to be self-tuning by using the continuity of the solution of a stationary Riccati equation as a function of parameters in a uniform operator topology. For a quadratic cost functional of the state and the control, the certainty equivalence control is shown to be self-optimizing; that is, the family of average costs converges to the optimal ergodic cost. Some examples of stochastic parabolic problems with boundary control and a structurally damped plate with random loading and point control are described that satisfy the assumptions for the adaptive control problem solved in this paper.

Journal ArticleDOI
TL;DR: Second-order necessary conditions for nonsmooth infinite-dimensional optimization problems with Banach space-valued equality and real-valued inequality constraints were developed in this article, where the objective function is the maximum over a parameter of functions.
Abstract: This paper develops second-order necessary conditions for nonsmooth infinite-dimensional optimization problems with Banach space-valued equality and real-valued inequality constraints. Another constraint in the form of a closed convex set is also present. The objective function is the maximum over a parameter of functions $f(t,z)$ that are Lipschitz in $z$ and upper semicontinuous in $t$. The inequality constraints $g(s,z)$ depend on a parameter $s$. The technique we use is a generalization of that of Dubovitskii and Milyutin. The second-order conditions obtained here are in terms of a certain function $\sigma$ that disappears when the parameters and a certain set that derives from the given convex set are absent. The presence of the function $\sigma$ and that set is due to the envelope-like effect discovered by Kawasaki.

Journal ArticleDOI
TL;DR: In this paper, the problem of simultaneous stabilizability of a finite family of single-input, single-output time-invariant systems by a time-invariant controller is studied.
Abstract: The problem of the simultaneous stabilizability of a finite family of single-input, single-output time-invariant systems by a time-invariant controller is studied. The link between stabilization and avoidance is shown and is used to derive necessary conditions for the simultaneous stabilization of $k$ plants. These necessary conditions are proved to be, in general, not sufficient. This result also disproves a long-standing conjecture on the stabilizability condition of a single plant with a stable minimum phase controller. The main result is to show that, unlike the case of two plants, the existence of a simultaneous stabilizing controller for more than two plants is not guaranteed by the existence of a controller such that the closed loops have no real unstable poles.

Journal ArticleDOI
TL;DR: In this article, a compactification of the space of proper transfer functions with a fixed McMillan degree was introduced, which has the structure of a projective variety and each point of this variety can be given an interpretation as a certain autoregressive system in the sense of Willems.
Abstract: This paper introduces a compactification of the space of proper $p\times m$ transfer functions with a fixed McMillan degree $n$. Algebraically, this compactification has the structure of a projective variety and each point of this variety can be given an interpretation as a certain autoregressive system in the sense of Willems. It is shown that the pole placement map with dynamic compensators turns out to be a central projection from this compactification to the space of closed-loop polynomials. Using this geometric point of view, necessary and sufficient conditions are given when a strictly proper or proper system can be generically pole assigned by a complex dynamic compensator of McMillan degree $q$.

Journal ArticleDOI
TL;DR: In this article, a convex optimization procedure is proposed to determine the scalings that minimize the Euclidean condition number; the resulting problem can be solved in polynomial time with off-the-shelf software.
Abstract: This paper considers the problem of determining the row and/or column scaling of a matrix $A$ that minimizes the condition number of the scaled matrix. This problem has been studied by many authors. For the cases of the $\infty$-norm and the 1-norm, the scaling problem was completely solved in the 1960s. It is the Euclidean norm case that has widespread application in robust control analyses. For example, it is used for integral controllability tests based on steady-state information, for the selection of sensors and actuators based on dynamic information, and for studying the sensitivity of stability to uncertainty in control systems. Minimizing the scaled Euclidean condition number has been an open question---researchers proposed approaches to solving the problem numerically, but none of the proposed numerical approaches guaranteed convergence to the true minimum. This paper provides a convex optimization procedure to determine the scalings that minimize the Euclidean condition number. This optimization can be solved in polynomial time with off-the-shelf software.
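The paper's convex procedure is not reproduced here; as a rough stand-in for intuition only, a brute-force grid search over diagonal scalings already shows how much the Euclidean condition number can drop (the matrix and grid below are hypothetical):

```python
import numpy as np

A = np.array([[1.0, 100.0],
              [0.0, 1.0]])          # badly scaled: cond(A) is about 1e4

best = np.linalg.cond(A)            # Euclidean condition number
for p in np.linspace(-6.0, 6.0, 25):
    for q in np.linspace(-6.0, 6.0, 25):
        D1 = np.diag([1.0, np.exp(p)])   # row scaling
        D2 = np.diag([1.0, np.exp(q)])   # column scaling
        best = min(best, np.linalg.cond(D1 @ A @ D2))
print(np.linalg.cond(A), best)
```

Unlike this grid search, the paper's convex formulation comes with a guarantee of convergence to the true minimum.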

Journal ArticleDOI
TL;DR: In this article, all finite-dimensional estimation algebras with maximal rank are classified if the dimension of the state space is less than or equal to two; therefore, from the Lie algebraic point of view, all finite-dimensional nonlinear filters are understood generically in the case of state space dimension less than three.
Abstract: The idea of using estimation algebras to construct finite- dimensional nonlinear filters was first proposed by Brockett and Mitter independently. It turns out that the concept of estimation algebra plays a crucial role in the investigation of finite-dimensional nonlinear filters. In his talk at the International Congress of Mathematics in 1983, Brockett proposed classifying all finite-dimensional estimation algebras. In this paper, all finite-dimensional algebras with maximal rank are classified if the dimension of the state space is less than or equal to two. Therefore, from the Lie algebraic point of view, all finite-dimensional filters are understood generically in the case where the dimension of state space is less than three.

Journal ArticleDOI
TL;DR: In this paper, the Lipschitz constants for basic optimal solutions and basic feasible solutions of linear programs with respect to right-hand side perturbations are given in terms of norms of pseudoinverses of submatrices of the matrices involved.
Abstract: The main purpose of this paper is to give Lipschitz constants for basic optimal solutions (or vertices of solution sets) and basic feasible solutions (or vertices of feasible sets) of linear programs with respect to right-hand side perturbations. The Lipschitz constants are given in terms of norms of pseudoinverses of submatrices of the matrices involved, and are sharp under very general assumptions. There are two mathematical principles involved in deriving the Lipschitz constants: (1) the local upper Lipschitz constant of a Hausdorff lower semicontinuous mapping is equal to the Lipschitz constant of the mapping and (2) the Lipschitz constant of a finite-set-valued mapping can be inherited by its continuous submappings. Moreover, it is proved that any Lipschitz constant for basic feasible solutions can be used as a Lipschitz constant for basic optimal solutions, feasible solutions, and optimal solutions.
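A toy numerical check (hypothetical data, one fixed basis submatrix) of the pseudoinverse-norm bound for right-hand side perturbations:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))     # constraint matrix of A x = b
basis = [0, 2, 4]                   # columns of one basis submatrix
A_B = A[:, basis]
L = np.linalg.norm(np.linalg.pinv(A_B), 2)   # candidate Lipschitz constant

b = rng.standard_normal(3)
ratios = []
for _ in range(200):
    db = 1e-3 * rng.standard_normal(3)
    # basic-solution change on this basis: dx = A_B^{-1} db
    dx = np.linalg.solve(A_B, b + db) - np.linalg.solve(A_B, b)
    ratios.append(np.linalg.norm(dx) / np.linalg.norm(db))
print(L, max(ratios))
```

On a single basis the bound is elementary; the content of the paper is that norms of pseudoinverses of submatrices yield sharp constants uniformly over all bases and all vertices.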

Journal ArticleDOI
TL;DR: In this article, the authors presented various novel and extended results on least squares based adaptive minimum variance control for linear stochastic systems, and established self-optimality, self-tuning property, and the best possible convergence rate of the control law in a variety of situations of interest.
Abstract: Based on the recently established results on self-tuning regulators originally proposed by Astrom and Wittenmark, this paper presents various novel and extended results on least squares based adaptive minimum variance control for linear stochastic systems. These results establish self-optimality, self-tuning property, and the best possible convergence rate of the control law in a variety of situations of interest.

Journal ArticleDOI
TL;DR: In this article, the problem of controlling a Markov chain on a countable state space with ergodic or "long run average" cost is studied in the presence of additional constraints, requiring finitely many (e.g., $m$) other costs to satisfy prescribed bounds.
Abstract: The problem of controlling a Markov chain on a countable state space with ergodic or 'long run average' cost is studied in the presence of additional constraints, requiring finitely many (say, $m$) other ergodic costs to satisfy prescribed bounds. Under extremely general conditions, it is proved that an optimal stationary randomized strategy can be found that requires at most $m$ randomizations. This generalizes a result of Ross.
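For a finite toy instance (hypothetical, not from the paper), such a constrained average-cost problem reduces to a linear program over occupation measures:

```python
import numpy as np
from scipy.optimize import linprog

# Occupation-measure LP for a toy 2-state, 2-action average-cost MDP.
# Variables rho[(s,a)] ordered (0,0), (0,1), (1,0), (1,1).
# Dynamics: a=0 sends 0 -> 1 and keeps 1 -> 1; a=1 keeps 0 -> 0 and
# sends 1 -> 0.  Running cost c(s,a) = a; the extra ergodic constraint
# requires the long-run fraction of time in state 0 to be >= 1/2.
c = np.array([0.0, 1.0, 0.0, 1.0])
A_eq = np.array([[1.0, 0.0, 0.0, -1.0],   # flow balance at state 0
                 [1.0, 1.0, 1.0, 1.0]])   # total mass = 1
b_eq = np.array([0.0, 1.0])
A_ub = np.array([[-1.0, -1.0, 0.0, 0.0]]) # -(time in state 0) <= -1/2
b_ub = np.array([-0.5])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun, res.x)
```

Normalizing the optimal $\rho(s,\cdot)$ state by state recovers a stationary randomized strategy; with $m$ ergodic constraints, the result quoted above bounds the number of states where it must actually randomize by $m$.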

Journal ArticleDOI
TL;DR: In this paper, a dissipative feedback control synthesis is constructed to regulate the systems arising in fluid dynamics by utilizing nonlinear dynamic programming techniques, and the control law is designed for driving the system to a prescribed equilibrium state and enhancing the energy dissipation effects of the dynamic system.
Abstract: A dissipative feedback control synthesis is constructed to regulate the systems arising in fluid dynamics. The feedback law is obtained by utilizing nonlinear dynamic programming techniques. The control law is designed for driving the system to a prescribed equilibrium state and enhancing the energy dissipation effects of the dynamical system. The two-dimensional Navier--Stokes equations and the Burgers equation are used for numerical experiments to illustrate the effects of the feedback synthesis and the theoretical results.

Journal ArticleDOI
TL;DR: In this article, the authors considered Maxwell's equations with dissipative boundary conditions. Under certain geometric conditions imposed on the domain $\Omega$, results on uniform stabilization of the solutions are established, and exact boundary controllability is then obtained through Russell's "controllability via stabilizability" principle.
Abstract: This paper considers Maxwell's equations with dissipative boundary conditions. Under certain geometric conditions imposed on the domain $\Omega$, the results on uniform stabilization of the solutions are established. Exact boundary controllability is then obtained through Russell's "controllability via stabilizability" principle.

Journal ArticleDOI
TL;DR: In this article, it was shown that the optimal solution and the adjoint multipliers are differentiable functions of the parameter, and that solution differentiability provides a firm theoretical basis for numerical feedback schemes that have been developed for computing neighbouring extremals.
Abstract: Perturbed nonlinear control problems with data depending on a vector parameter are considered. Using second-order sufficient optimality conditions, it is shown that the optimal solution and the adjoint multipliers are differentiable functions of the parameter. The proof exploits the close connections between solutions of a Riccati differential equation and shooting methods for solving the associated boundary value problem. Solution differentiability provides a firm theoretical basis for numerical feedback schemes that have been developed for computing neighbouring extremals. The results are illustrated by an example that admits two extremal solutions. Second-order sufficient conditions single out one optimal solution for which a sensitivity analysis is carried out.