
Showing papers in "Siam Journal on Control and Optimization in 2003"


Journal ArticleDOI
TL;DR: This article proposes and analyzes a class of actor-critic algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic.
Abstract: In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence.

634 citations
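The two-time-scale structure described above can be sketched on a toy problem. Below is a minimal, hypothetical instance (a one-state MDP, i.e., a two-armed bandit) with a linearly parameterized critic updated on the fast timescale and a softmax actor updated in the critic-informed gradient direction on the slow timescale; it illustrates the timescale separation only, not the authors' algorithm or its convergence setting.

```python
import numpy as np

# Minimal two-time-scale actor-critic sketch on a one-state MDP (a two-armed
# bandit): action 1 pays reward 1, action 0 pays 0. Illustration only.
rng = np.random.default_rng(0)

theta = np.zeros(2)   # actor parameters (softmax preferences)
v = 0.0               # critic parameter (single linear feature = 1)

def policy(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

for t in range(5000):
    p = policy(theta)
    a = rng.choice(2, p=p)
    r = float(a == 1)                  # reward signal
    delta = r - v                      # TD error (one-state case)
    v += 0.1 * delta                   # critic update: faster timescale
    grad_log = -p                      # gradient of log pi(a) ...
    grad_log[a] += 1.0                 # ... for the softmax parameterization
    theta += 0.01 * delta * grad_log   # actor update: slower timescale

p = policy(theta)                      # final policy concentrates on action 1
```

With the reward gap of 1, the actor reliably concentrates on the better action while the critic tracks the average reward under the current policy.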


Journal ArticleDOI
TL;DR: A continuous-time version of the Markowitz mean-variance portfolio selection model is proposed and analyzed for a market consisting of one bank account and multiple stocks, finding that if the interest rate is deterministic, then the results exhibit (rather unexpected) similarity to their no-regime-switching counterparts, even if the stock appreciation and volatility rates are Markov-modulated.
Abstract: A continuous-time version of the Markowitz mean-variance portfolio selection model is proposed and analyzed for a market consisting of one bank account and multiple stocks. The market parameters, including the bank interest rate and the appreciation and volatility rates of the stocks, depend on the market mode that switches among a finite number of states. The random regime switching is assumed to be independent of the underlying Brownian motion. This essentially renders the underlying market incomplete. A Markov chain modulated diffusion formulation is employed to model the problem. Using techniques of stochastic linear-quadratic control, mean-variance efficient portfolios and efficient frontiers are derived explicitly in closed forms, based on solutions of two systems of linear ordinary differential equations. Related issues such as a minimum-variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those for the case when there is no regime switching. An interesting observation is, however, that if the interest rate is deterministic, then the results exhibit (rather unexpected) similarity to their no-regime-switching counterparts, even if the stock appreciation and volatility rates are Markov-modulated.

486 citations
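The closed-form flavor of the efficient frontier can be illustrated in the simplest possible setting: a single-period market with one bank account and several stocks and no regime switching, a drastic simplification of the paper's continuous-time, Markov-modulated model. All numbers below are hypothetical.

```python
import numpy as np

# Single-period Markowitz analogue: one bank account (rate rf) and three
# stocks. All data are assumed for illustration.
mu = np.array([0.08, 0.12, 0.15])        # expected stock returns (assumed)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # return covariance (assumed)
rf = 0.03                                # bank-account rate

# With a riskless asset the efficient frontier is a straight line in the
# (std, mean) plane through the tangency portfolio w* ~ Sigma^{-1}(mu - rf).
w = np.linalg.solve(Sigma, mu - rf)
w_tan = w / w.sum()                      # normalize to a fully invested portfolio
ret_tan = w_tan @ mu
var_tan = w_tan @ Sigma @ w_tan
sharpe = (ret_tan - rf) / np.sqrt(var_tan)
```

Every mean-variance efficient portfolio then mixes the bank account with this single tangency portfolio, the one-fund counterpart of the mutual fund theorem mentioned above.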


Journal ArticleDOI
TL;DR: In this article, exponential and polynomial decay rates for the partially viscoelastic nonlinear wave equation subject to a nonlinear and localized frictional damping are shown.
Abstract: In this article we show exponential and polynomial decay rates for the partially viscoelastic nonlinear wave equation subject to a nonlinear and localized frictional damping. The equation that models this problem is given by $$u_{tt} - \kappa_0 \Delta u + \int_0^t \mbox{div}\,[a(x)g(t-s)\nabla u(s)]\,ds + f(u) + b(x)h(u_t) = 0 \quad \mbox{in } \Omega\times\Bbb R^+,$$ where $a,b$ are nonnegative functions, $a\in C^1(\overline{\Omega})$, $b\in L^{\infty}(\Omega)$, satisfying the assumption $$a(x) + b(x) \geq \delta > 0 \quad \forall x \in \Omega,$$ and $f$ and $h$ are power-like functions. We observe that this assumption gives us a wide assortment of possibilities from which to choose the functions a(x) and b(x), and the most interesting case occurs when one has simultaneous and complementary damping mechanisms. Taking this point of view into account, a distinctive feature of our paper is exactly to consider different and localized damping mechanisms acting in the domain, but not necessarily the "strategically localized dissipations" considered in the prior literature.

355 citations


Journal ArticleDOI
TL;DR: It is shown that for any given continuously differentiable function a and any given positive constant $\lambda$ the authors can explicitly construct a boundary feedback control law such that the solution of the equation with the control law converges to zero exponentially at the rate of $\lambda$.
Abstract: In this paper we study the problem of boundary feedback stabilization for the unstable heat equation $u_t(x,t) = u_{xx}(x,t) + a(x)u(x,t)$. This equation can be viewed as a model of a heat-conducting rod in which not only is the heat being diffused (mathematically, due to the diffusive term $u_{xx}$) but the destabilizing heat is also being generated (mathematically, due to the term $au$ with $a > 0$). We show that for any given continuously differentiable function a and any given positive constant $\lambda$ we can explicitly construct a boundary feedback control law such that the solution of the equation with the control law converges to zero exponentially at the rate of $\lambda$. This is a continuation of the recent work of Boskovic, Krstic, and Liu [IEEE Trans. Automat. Control, 46 (2001), pp. 2022--2028] and Balogh and Krstic [European J. Control, 8 (2002), pp. 165--176].

276 citations
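The instability mechanism is easy to see numerically. The sketch below (an illustration, not the paper's construction) discretizes the open-loop operator $u_{xx} + au$ with homogeneous Dirichlet conditions by central differences; for constant a its top eigenvalue is approximately $a - \pi^2$, so the uncontrolled rod is unstable exactly when a exceeds $\pi^2$.

```python
import numpy as np

# Spectrum of the open-loop operator u_xx + a*u on (0,1) with Dirichlet
# boundary conditions, discretized by central differences. Illustration
# only; the paper treats general a(x) and builds the boundary feedback.
n = 200
h = 1.0 / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2

def top_eig(a):
    return np.linalg.eigvalsh(D2 + a * np.eye(n)).max()

stable = top_eig(5.0)      # 5 < pi^2: negative, heat dissipates
unstable = top_eig(15.0)   # 15 > pi^2: positive, open loop blows up
```

The boundary feedback constructed in the paper shifts this spectrum so that the closed loop decays at any prescribed rate $\lambda$.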


Journal ArticleDOI
TL;DR: A systematic investigation of the notion of Bregman monotonicity leads to a simplified analysis of numerous algorithms and to the development of a new class of parallel block-iterative surrogate BRegman projection schemes.
Abstract: A broad class of optimization algorithms based on Bregman distances in Banach spaces is unified around the notion of Bregman monotonicity. A systematic investigation of this notion leads to a simplified analysis of numerous algorithms and to the development of a new class of parallel block-iterative surrogate Bregman projection schemes. Another key contribution is the introduction of a class of operators that is shown to be intrinsically tied to the notion of Bregman monotonicity and to include the operators commonly found in Bregman optimization methods. Special emphasis is placed on the viability of the algorithms and the importance of Legendre functions in this regard. Various applications are discussed.

273 citations
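A concrete building block of this algorithm class is the Bregman projection itself. The sketch below (a generic illustration, not taken from the paper) projects a positive vector onto a hyperplane with respect to the Kullback-Leibler divergence, whose Bregman function (the negative entropy) is a standard Legendre function on the positive orthant; the projection has the exponential form $z_i = x_i e^{\lambda a_i}$, with the multiplier found by bisection.

```python
import numpy as np

# KL-Bregman projection of a positive vector x onto {z : a.z = b}.
# The multiplier lam is pinned down by the constraint and located by
# bisection on the monotone function g(lam) = a . z(lam) - b.
# The data below are illustrative, not from the paper.
def kl_project(x, a, b, lo=-50.0, hi=50.0):
    def g(lam):
        return a @ (x * np.exp(lam * a)) - b
    for _ in range(200):               # bisection: g(lo) <= 0 < g(hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    lam = 0.5 * (lo + hi)
    return x * np.exp(lam * a)

x = np.array([1.0, 1.0, 1.0])
a = np.array([1.0, 2.0, 3.0])
z = kl_project(x, a, 3.0)              # on the hyperplane, stays positive
```

Cycling such projections over several constraints is the basic pattern behind the block-iterative schemes discussed above.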


Journal ArticleDOI
TL;DR: The approximate controllability of the abstract semilinear deterministic and stochastic control systems under the natural assumption that the associated linear control system is approximately controllable is shown.
Abstract: Various sufficient conditions for approximate controllability of linear evolution systems in abstract spaces have been obtained, but approximate controllability of semilinear control systems usually requires some complicated and limited assumptions. In this paper, we show the approximate controllability of the abstract semilinear deterministic and stochastic control systems under the natural assumption that the associated linear control system is approximately controllable. The results are obtained using new properties of symmetric operators (which are proved in this paper), compact semigroups, the Schauder fixed point theorem, and/or the contraction mapping principle.

263 citations


Journal ArticleDOI
TL;DR: It is shown conversely that robust stability implies solvability of these LMIs from a certain rank onward, and this result extends the Lyapunov-inequality characterization of asymptotic stability for usual linear systems.
Abstract: In this paper, robust stability for linear systems with several uncertain (complex and/or real) scalar parameters is studied. A countable family of conditions sufficient for robust stability is given, in terms of solvability of some simple linear matrix inequalities (LMIs). These conditions are of increasing precision, and it is shown conversely that robust stability implies solvability of these LMIs from a certain rank onward. This result extends the classical characterization of asymptotic stability for usual linear systems by solvability of the Lyapunov inequality. It is based on the search for parameter-dependent quadratic Lyapunov functions, polynomial of increasing degree in the parameters.

205 citations
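The coarsest member of such a family, a single parameter-independent quadratic Lyapunov function, can be checked without an LMI solver when one real parameter enters affinely: negativity of $A(\delta)^T P + P A(\delta)$ at the two vertices $\delta = \pm 1$ implies it on the whole interval. The matrices below are illustrative, not from the paper.

```python
import numpy as np

# Vertex check with one common quadratic Lyapunov function for
# A(d) = A0 + d*A1, d in [-1, 1]. P is taken from the nominal Lyapunov
# equation A0^T P + P A0 = -I, solved via Kronecker vectorization.
A0 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A1 = np.array([[0.2, 0.0], [0.1, -0.2]])

n = 2
I = np.eye(n)
# flatten(A0^T P + P A0) = (kron(A0^T, I) + kron(I, A0^T)) @ flatten(P)
M = np.kron(A0.T, I) + np.kron(I, A0.T)
P = np.linalg.solve(M, -I.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                    # symmetrize against round-off

def lyap_max(d):
    A = A0 + d * A1
    return np.linalg.eigvalsh(A.T @ P + P @ A).max()
```

When this vertex test fails, the hierarchy described above suggests moving to Lyapunov functions polynomial in the parameter of higher degree.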


Journal ArticleDOI
TL;DR: This work proves the existence and uniqueness result for the associated Riccati equation, which in the general case is a backward stochastic differential equation with the generator being highly nonlinear in the two unknown variables and solves Bismut and Peng's long-standing open problem.
Abstract: Consider the minimization of the following quadratic cost functional: $$J(u):=E\langle Mx_T,x_T\rangle +E\int_0^T(\langle Q_sx_s,x_s\rangle +\langle N_su_s,u_s\rangle )\, ds,$$ where x is the solution of the following linear stochastic control system: $$dx_t=(A_tx_t+B_tu_t)\, dt +\sum_{i=1}^d(C_t^ix_t+D_t^iu_t)\, dW_t^i, \qquad x_0=h,$$ and u is a square integrable adapted process. The problem is conventionally called the stochastic LQ (the abbreviation of "linear quadratic") problem. We are concerned with the following general case: the coefficients $A, B, C^i, D^i, Q, N$, and M are allowed to be adapted processes or random matrices. We prove the existence and uniqueness result for the associated Riccati equation, which in our general case is a backward stochastic differential equation with the generator (the drift term) being highly nonlinear in the two unknown variables. This solves Bismut and Peng's long-standing open problem (for the case of a Brownian filtration), which was initially proposed by the French mathematician J. M. Bismut [in Seminaire de Probabilites XII, Lecture Notes in Math. 649, C. Dellacherie, P. A. Meyer, and M. Weil, eds., Springer-Verlag, Berlin, 1978, pp. 180--264]. We also provide a rigorous derivation of the Riccati equation from the stochastic Hamilton system. This completes the interrelationship between the Riccati equation and the stochastic Hamilton system as two different but equivalent tools for the stochastic LQ problem. There are two key points in our arguments. The first one is to connect the existence of the solution of the Riccati equation to the homomorphism of the stochastic flows derived from the optimally controlled system. Actually, we establish their equivalence. As a consequence, we can construct solutions to a sequence of suitably modified Riccati equations in terms of the associated stochastic Hamilton systems (and the optimal controls).
The second key point is to establish a new type of a priori estimate for solutions of Riccati equations, with which we show that the sequence of constructed solutions has a limit which is a solution to the original Riccati equation.

193 citations


Journal ArticleDOI
TL;DR: This work characterize different types of solutions of a vector optimization problem by means of a scalarization procedure, and considers some restricted notions of efficiency, such as strict and proper efficiency, which are characterized as Tikhonov well-posed minima and sharp minima for the scalarized problem.
Abstract: In this work we characterize different types of solutions of a vector optimization problem by means of a scalarization procedure. Usually different scalarizing functions are used in order to obtain the various solutions of the vector problem. Here we consider different kinds of solutions of the same scalarized problem. Our results allow us to establish a parallelism between the solutions of the scalarized problem and the various efficient frontiers: stronger solution concepts of the scalar problem correspond to more restrictive notions of efficiency. Besides the usual notions of weakly efficient and efficient points, which are characterized as global and strict global solutions of the scalarized problem, we also consider some restricted notions of efficiency, such as strict and proper efficiency, which are characterized as Tikhonov well-posed minima and sharp minima for the scalarized problem.

178 citations


Journal ArticleDOI
TL;DR: The results use a novel condition based on nontangency between the vector field and invariant or negatively invariant subsets of the level or sublevel sets of the Lyapunov function or its derivative and represent extensions of previously known stability results involving semidefinite Lyap unov functions.
Abstract: This paper focuses on the stability analysis of systems having a continuum of equilibria. Two notions that are of particular relevance to such systems are convergence and semistability. Convergence is the property whereby every solution converges to a limit point that may depend on the initial condition. Semistability is the additional requirement that all solutions converge to limit points that are Lyapunov stable. We give new Lyapunov-function-based results for convergence and semistability of nonlinear systems. These results do not make assumptions of sign definiteness on the Lyapunov function. Instead, our results use a novel condition based on nontangency between the vector field and invariant or negatively invariant subsets of the level or sublevel sets of the Lyapunov function or its derivative and represent extensions of previously known stability results involving semidefinite Lyapunov functions. To illustrate our results we deduce convergence and semistability of the kinetics of the Michaelis--Menten chemical reaction and the closed-loop dynamics of a scalar system under a universal adaptive stabilizing feedback controller.

137 citations


Journal ArticleDOI
TL;DR: An asymptotic expansion of a functional with respect to the creation of a small hole in the domain is obtained for the Helmholtz equation with a Dirichlet condition on the boundary of a circular hole.
Abstract: The aim of the topological sensitivity analysis is to obtain an asymptotic expansion of a functional with respect to the creation of a small hole in the domain. In this paper such an expansion is obtained for the Helmholtz equation with a Dirichlet condition on the boundary of a circular hole. Some applications of this work to waveguide optimization are presented.

Journal ArticleDOI
TL;DR: This paper studies explicit representations of the critical subspace and shows that the quadratic form can be simplified by a transformation that uses a solution to a linear matrix differential equation, which leads to an easily implementable test for SSC in the case of a bang-bang control with one or two switching points.
Abstract: We study second order sufficient optimality conditions (SSC) for optimal control problems with control appearing linearly. Specifically, time-optimal bang-bang controls will be investigated. In [N. P. Osmolovskii, Sov. Phys. Dokl., 33 (1988), pp. 883--885; Theory of Higher Order Conditions in Optimal Control, Doctor of Sci. thesis, Moscow, 1988 (in Russian); Russian J. Math. Phys., 2 (1995), pp. 487--516; {\it Russian J. Math. Phys.}, 5 (1997), pp. 373--388; Proceedings of the Conference "Calculus of Variations and Optimal Control," Chapman & Hall/CRC, Boca Raton, FL, 2000, pp. 198--216; A. A. Milyutin and N. P. Osmolovskii, Calculus of Variations and Optimal Control, Transl. Math. Monogr. 180, AMS, Providence, RI, 1998], SSC have been developed in terms of the positive definiteness of a quadratic form on a critical cone or subspace. No systematic numerical methods for verifying SSC are to be found in these papers. In the present paper, we study explicit representations of the critical subspace. This leads to an easily implementable test for SSC in the case of a bang-bang control with one or two switching points. In general, we show that the quadratic form can be simplified by a transformation that uses a solution to a linear matrix differential equation. Particular conditions even allow us to convert the quadratic form to perfect squares. Three numerical examples demonstrate the numerical viability of the proposed tests for SSC.

Journal ArticleDOI
TL;DR: It is shown that, using this approach, practical and/or semiglobal stability of the exact discrete-time model is achieved under appropriate conditions.
Abstract: We present results on numerical regulator design for sampled-data nonlinear plants via their approximate discrete-time plant models. The regulator design is based on an approximate discrete-time plant model and is carried out either via an infinite horizon optimization problem or via a finite horizon with terminal cost optimization problem. In both cases, we discuss situations when the sampling period T and the integration period h used in obtaining the approximate discrete-time plant model are the same or they are independent of each other. We show that, using this approach, practical and/or semiglobal stability of the exact discrete-time model is achieved under appropriate conditions.
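The idea can be sketched on a scalar example where the exact discrete-time model happens to be available for comparison (the paper, of course, targets nonlinear plants where it is not). Below, a deadbeat controller is designed on the Euler approximate model of the unstable plant x' = x + u and then checked against the exact sampled model; everything here is an illustration, not the authors' construction.

```python
import numpy as np

# Design on the Euler approximate model, verify on the exact sampled model.
# Scalar unstable plant: x' = x + u, with sampling period T.
T = 0.1

# Euler approximate model: x+ = (1 + T) x + T u.
# Deadbeat design on it: u = -k x with k = (1 + T) / T zeroes the Euler model.
k = (1.0 + T) / T

# Exact discrete-time model: x+ = e^T x + (e^T - 1) u.
closed_exact = np.exp(T) - (np.exp(T) - 1.0) * k   # exact closed-loop factor

x = 1.0
for _ in range(20):           # simulate the exact closed loop
    x = closed_exact * x
```

The Euler design predicts x+ = 0 exactly; on the exact model a small residual contraction factor of about -0.05 remains, which is the flavor of the "practical" stability guarantee discussed above.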

Journal ArticleDOI
TL;DR: Under certain regularity conditions, it is proved that for piecewise linear interpolation, policy iteration converges quadratically and under more general conditions it is established that convergence is superlinear.
Abstract: This paper analyzes asymptotic convergence properties of policy iteration in a class of stationary, infinite-horizon Markovian decision problems that arise in optimal growth theory. These problems have continuous state and control variables and must therefore be discretized in order to compute an approximate solution. The discretization may render inapplicable known convergence results for policy iteration such as those of Puterman and Brumelle [Math. Oper. Res., 4 (1979), pp. 60--69]. Under certain regularity conditions, we prove that for piecewise linear interpolation, policy iteration converges quadratically. Also, under more general conditions we establish that convergence is superlinear. We show how the constants involved in these convergence orders depend on the grid size of the discretization. These theoretical results are illustrated with numerical experiments that compare the performance of policy iteration and the method of successive approximations.
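The fast convergence of exact policy iteration is easy to see on a toy discounted MDP (random illustrative data, not the growth-theory model analyzed in the paper): each sweep solves a linear system for the current policy's value and then improves greedily.

```python
import numpy as np

# Exact policy iteration on a tiny discounted MDP with random data.
nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state dist.
R = rng.random((nS, nA))                       # rewards

pi = np.zeros(nS, dtype=int)
for it in range(1, 100):
    P_pi = P[np.arange(nS), pi]                # transition matrix under pi
    r_pi = R[np.arange(nS), pi]
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)  # evaluate
    q = R + gamma * (P @ v)                    # one-step lookahead
    new_pi = q.argmax(axis=1)
    if (new_pi == pi).all():                   # greedy policy unchanged: done
        break
    pi = new_pi
```

On continuous-state problems such as the one in the paper, the evaluation step is replaced by interpolation on a grid, which is where the quadratic and superlinear rates and their grid-size dependence enter.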

Journal ArticleDOI
TL;DR: Regular conditional versions of the forward and inverse Bayes formula are shown to have dual variational characterizations involving the minimization of apparent information and the maximization of compatible information, according to which Bayes' formula and its inverse are optimal information processors.
Abstract: We consider estimation problems, in which the estimand, X, and observation, Y, take values in measurable spaces. Regular conditional versions of the forward and inverse Bayes formula are shown to have dual variational characterizations involving the minimization of apparent information and the maximization of compatible information. These both have natural information-theoretic interpretations, according to which Bayes' formula and its inverse are optimal information processors. The variational characterization of the forward formula has the same form as that of Gibbs measures in statistical mechanics. The special case in which X and Y are diffusion processes governed by stochastic differential equations is examined in detail. The minimization of apparent information can then be formulated as a stochastic optimal control problem, with cost that is quadratic in both the control and observation fit. The dual problem can be formulated in terms of infinite-dimensional deterministic optimal control. Local versions of the variational characterizations are developed which quantify information flow in the estimators. In this context, the information conserving property of Bayesian estimators coincides with the Davis--Varaiya martingale stochastic dynamic programming principle.

Journal ArticleDOI
TL;DR: A new numerical solution method based on a reformulation of the semi-infinite problem as a Stackelberg game and the use of regularized nonlinear complementarity problem functions to solve lower level optimization problems with convex lower level problems is introduced.
Abstract: We introduce a new numerical solution method for semi-infinite optimization problems with convex lower level problems. The method is based on a reformulation of the semi-infinite problem as a Stackelberg game and the use of regularized nonlinear complementarity problem functions. This approach leads to central path conditions for the lower level problems, where for a given path parameter a smooth nonlinear finite optimization problem has to be solved. The solution of the semi-infinite optimization problem then amounts to driving the path parameter to zero. We show convergence properties of the method and give a number of numerical examples from design centering and from robust optimization, where actually so-called generalized semi-infinite optimization problems are solved. The presented method is easy to implement, and in our examples it works well for dimensions of the semi-infinite index set at least up to 150.

Journal ArticleDOI
TL;DR: It is proved that by observing only one component, one can get back a full weakened energy of both components under a compatibility condition linking the operators of each equation and for small coupling.
Abstract: This work is concerned with the boundary observability of an abstract system of two coupled second order evolution equations, the coupling operator being a compact perturbation of the uncoupled system. We assume that only one of the two components of the unknown is observed. This is indirect observability. We prove that by observing only one component, one can get back a full weakened energy of both components under a compatibility condition linking the operators of each equation and for small coupling. Using the Hilbert uniqueness method, we then establish an indirect exact controllability result. We apply this abstract result to several coupled systems of partial differential equations (wave-wave, coupled elastodynamic systems, Petrowsky-Petrowsky, and wave-Petrowsky systems).

Journal ArticleDOI
TL;DR: The classical second order sufficient optimality conditions are verified for an optimal simply connected domain, but the value of the cost can be improved by the topology variations, and therefore the optimal solution can be substantially changed by applying the topological optimization.
Abstract: New optimality conditions are derived for a class of shape optimization problems. The conditions are established on the boundary by an application of the boundary variations technique and in the interior of an optimal domain by exploiting the topological derivative method. An example is provided for which the classical second order sufficient optimality conditions are verified for an optimal simply connected domain. However, the value of the cost can be improved by the topology variations, and therefore, the optimal solution can be substantially changed by applying the topology optimization.

Journal ArticleDOI
TL;DR: A practically favorable necessary and sufficient condition for the separability of the zeros of a function of sine type is derived and the result is applied to get Riesz basis generation of a coupled string equation with joint dissipative feedback control.
Abstract: Suppose that $\{\lambda _n\}$ is the set of zeros of a sine-type generating function of the exponential system $\{e^{i\lambda_n t}\}$ in L2 (0,T) and is separated. Levin and Golovin's classical theorem claims that $\{e^{i\lambda_n t}\}$ forms a Riesz basis for L2 (0,T). In this article, we relate this result with Riesz basis generation of eigenvectors of the system operator of the linear time-invariant evolution equation in Hilbert spaces through its spectrum. A practically favorable necessary and sufficient condition for the separability of the zeros of a function of sine type is derived. The result is applied to get Riesz basis generation of a coupled string equation with joint dissipative feedback control.

Journal ArticleDOI
TL;DR: It is shown that the control and observer designs can be combined to yield a globally stabilizing compensator and are numerically demonstrated on the problem of controlling stall in a model of axial compressors.
Abstract: We consider the problem of global stabilization of a semilinear dissipative evolution equation by finite-dimensional control with finite-dimensional outputs. Coupling between the system modes occurs directly through the nonlinearity and also through the control influence functions. Similar modal coupling occurs in the infinite-dimensional error dynamics through the nonlinearity and measurements. For both the control and observer designs, rather than decompose the original system into Fourier modes, we consider Lyapunov functions based on the infinite-dimensional dynamics of the state and error systems, respectively. The inner product terms of the Lyapunov derivative are decomposed into Fourier modes. Upper bounds on the terms representing control and observation spillover are obtained. Linear quadratic regulator (LQR) designs are used to stabilize the state and error systems with these upper bounds. Relations between system and LQR design parameters are given to ensure global stability of the state and error dynamics with robustness with respect to control and observation spillover, respectively. It is shown that the control and observer designs can be combined to yield a globally stabilizing compensator. The control and observer designs are numerically demonstrated on the problem of controlling stall in a model of axial compressors.

Journal ArticleDOI
TL;DR: A new method for the design of observers for nonlinear systems using backstepping is introduced and if the initial estimation error is not too large, then the estimation error goes to zero exponentially.
Abstract: We introduce a new method for the design of observers for nonlinear systems using backstepping. The method is applicable to a class of nonlinear systems slightly larger than those treated by Gauthier, Hammouri, and Othman [IEEE Trans. Automat. Control, 27 (1992), pp. 875--880]. They presented an observer design method that is globally convergent using high gain. In contrast to theirs, our observer is not high gain, but it is only locally convergent. If the initial estimation error is not too large, then the estimation error goes to zero exponentially. A design algorithm is presented.

Journal ArticleDOI
N. U. Ahmed1
TL;DR: This paper presents some results on the question of existence of optimal controls for a large class of semilinear impulsive systems in infinite dimensional spaces with admissible controls from the space of vector measures.
Abstract: This paper presents some results on the question of existence of optimal controls for a large class of semilinear impulsive systems in infinite dimensional spaces with admissible controls from the space of vector measures. This also includes, as a special case, the class of purely impulsive controls. Two physical examples are presented for illustration.

Journal ArticleDOI
TL;DR: Part II continues the development of policy synthesis techniques for multiclass queueing networks based upon a linear fluid model and an analogous workload-relaxation is introduced for the stochastic model.
Abstract: Part II continues the development of policy synthesis techniques for multiclass queueing networks based upon a linear fluid model. The following are shown: A relaxation of the fluid model based on workload leads to an optimization problem of lower dimension. An analogous workload-relaxation is introduced for the stochastic model. These relaxed control problems admit pointwise optimal solutions in many instances. A translation to the original fluid model is almost optimal, with vanishing relative error as the network load $\rho$ approaches one. It is pointwise optimal after a short transient period, provided a pointwise optimal solution exists for the relaxed control problem. A translation of the optimal policy for the fluid model provides a policy for the stochastic network model that is almost optimal in heavy traffic, over all solutions to the relaxed stochastic model, again with vanishing relative error. The regret is of order $|\log(1-\rho)|$.

Journal ArticleDOI
TL;DR: It is shown that, differently from the robust stabilization case, one can always derive a linear controller; that is, nonlinear controllers cannot outperform linear ones for the gain scheduling problem.
Abstract: In this paper we consider the problem of stabilizing linear parameter varying (LPV) systems by means of gain scheduling control. This technique amounts to designing a controller which is able to update its parameters on-line according to the variations of the plant parameters. We first consider the state feedback case and show a design procedure based on the construction of a Lyapunov function for discrete-time LPV systems in which the parameter variations are affine and occur in the state matrix only. This procedure produces a nonlinear static controller. We show that, differently from the robust stabilization case, we can always derive a linear controller; that is, nonlinear controllers cannot outperform linear ones for the gain scheduling problem. Then we show that this procedure has a dual version which leads to the construction of a linear gain scheduling observer. The two procedures may be combined to derive an observer-based linear gain scheduling compensator.

Journal ArticleDOI
TL;DR: A possibly unbounded positive operator on the Hilbert space H, which is boundedly invertible, and C0, a bounded operator from A0, with the norm $\|z\|_{1/2}^2=\langle...$
Abstract: Let A0 be a possibly unbounded positive operator on the Hilbert space H, which is boundedly invertible. Let C0 be a bounded operator from ${\cal D}(A_0^{1/2})$ (with the norm $\|z\|_{1/2}^2=\langle...

Journal ArticleDOI
TL;DR: This work considers a stochastic control problem that has emerged in the economics literature as an investment model under uncertainty and finds that this has a priori rather unexpected features.
Abstract: We consider a stochastic control problem that has emerged in the economics literature as an investment model under uncertainty. This problem combines features of both stochastic impulse control and optimal stopping. The aim is to discover the form of the optimal strategy. It turns out that this has a priori rather unexpected features. The results that we establish are of an explicit nature. We also construct an example whose value function does not possess C1 regularity.

Journal ArticleDOI
TL;DR: Lyapunov-like characterizations for the concepts of nonuniform in time robust global asymptotic stability and input-to-state stability for time-varying systems are established.
Abstract: Lyapunov-like characterizations for the concepts of nonuniform in time robust global asymptotic stability and input-to-state stability for time-varying systems are established. The main result of our work enables us to derive (1) necessary and sufficient conditions for feedback stabilization for affine in the control systems and (2) sufficient conditions for the propagation of the input-to-state stability property through integrators.

Journal ArticleDOI
TL;DR: It is shown that uniform global asymptotic controllability to a closed (not necessarily compact) set for a locally Lipschitz nonlinear control system implies the existence of a locally lpschitz control-Lyapunov function, and from this function a feedback is constructed that is robust to measurement noise.
Abstract: Given a weakly uniformly globally asymptotically stable closed (not necessarily compact) set ${\cal A}$ for a differential inclusion that is defined on $\mathbb{R}^n$, is locally Lipschitz on $\mathbb{R}^n \backslash {\cal A}$, and satisfies other basic conditions, we construct a weak Lyapunov function that is locally Lipschitz on $\mathbb{R}^n$. Using this result, we show that uniform global asymptotic controllability to a closed (not necessarily compact) set for a locally Lipschitz nonlinear control system implies the existence of a locally Lipschitz control-Lyapunov function, and from this control-Lyapunov function we construct a feedback that is robust to measurement noise.

Journal ArticleDOI
TL;DR: This paper attempts to solve the problem of existence and uniqueness of solutions to the indefinite SREs for a number of special, yet important, cases
Abstract: This paper is concerned with stochastic Riccati equations (SREs), which are a class of matrix-valued, nonlinear backward stochastic differential equations (BSDEs). The SREs under consideration are, in general, indefinite, in the sense that certain parameter matrices are indefinite. This kind of equations arises from the stochastic linear-quadratic (LQ) optimal control problem with random coefficients and indefinite state and control weighting costs, the latter having profound implications in both theory and applications. While the solvability of the SREs is the key to solving the indefinite stochastic LQ control, it remains, in general, an extremely difficult, open problem. This paper attempts to solve the problem of existence and uniqueness of solutions to the indefinite SREs for a number of special, yet important, cases.

Journal ArticleDOI
TL;DR: The approach transforms the question of continuity of the input/output map of a boundary control system into boundedness of the solution to a related elliptic problem.
Abstract: Continuity of the input/output map for boundary control systems is shown through the system transfer function. Our approach transforms the question of continuity of the input/output map of a boundary control system to uniform boundedness of the solution to a related elliptic problem. This is shown for a class of boundary control systems with Dirichlet, Neumann, or Robin boundary control.