
Showing papers in "Siam Journal on Control and Optimization in 2008"


Journal ArticleDOI
TL;DR: A new global asymptotic stabilization result by output feedback for feedback and feedforward systems is proposed, obtained by combining homogeneity in the bi-limit with a new recursive observer design procedure for a chain of integrators.
Abstract: We introduce two new tools that can be useful in nonlinear observer and output feedback design. The first one is a simple extension of the notion of homogeneous approximation to make it valid both at the origin and at infinity (homogeneity in the bi-limit). Exploiting this extension, we give several results concerning stability and robustness for a homogeneous in the bi-limit vector field. The second tool is a new recursive observer design procedure for a chain of integrators. Combining these two tools, we propose a new global asymptotic stabilization result by output feedback for feedback and feedforward systems.

599 citations
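The chain-of-integrators structure above is concrete enough to sketch. Below is a classical high-gain observer for a two-integrator chain, offered only as a baseline illustration of the kind of design the paper's recursive homogeneous-in-the-bi-limit observer generalizes; all gains and numbers are ours, not the paper's.

```python
import numpy as np

# Baseline illustration only (not the paper's homogeneous observer):
# a classical high-gain observer for the chain x1' = x2, x2' = u,
# with output y = x1. Gains place both observer poles at -2.

h, T = 1e-3, 10.0
x = np.array([1.0, 0.5])        # plant state
xh = np.zeros(2)                # observer state (starts with no information)
u = 0.0

for _ in range(int(T / h)):
    y = x[0]                                 # measured output
    x = x + h * np.array([x[1], u])          # plant, explicit Euler step
    innov = y - xh[0]                        # output injection term
    # observer: xh1' = xh2 + l1*(y - xh1), xh2' = u + l2*(y - xh1)
    xh = xh + h * np.array([xh[1] + 4.0 * innov, u + 4.0 * innov])

print(float(np.linalg.norm(x - xh)))   # estimation error after 10 s
```

The error dynamics have both eigenvalues at -2, so the estimate converges to the plant state regardless of the (here constant) input.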


Journal ArticleDOI
TL;DR: Graph-theoretic conditions are obtained which address the convergence question for the leaderless version of the widely studied Vicsek consensus problem.
Abstract: This paper presents new graph-theoretic results appropriate for the analysis of a variety of consensus problems cast in dynamically changing environments. The concepts of rooted, strongly rooted, and neighbor-shared are defined, and conditions are derived for compositions of sequences of directed graphs to be of these types. The graph of a stochastic matrix is defined, and it is shown that under certain conditions the graph of a Sarymsakov matrix and a rooted graph are one and the same. As an illustration of the use of the concepts developed in this paper, graph-theoretic conditions are obtained which address the convergence question for the leaderless version of the widely studied Vicsek consensus problem.

586 citations
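As a toy numerical companion (ours, not the paper's), the matrix-product view of consensus can be illustrated directly: multiplying row-stochastic matrices whose directed graphs are rooted at a common agent drives the product toward a rank-one matrix, i.e., all rows agree.

```python
import numpy as np

# Toy illustration: products of row-stochastic matrices whose directed
# graphs are rooted converge to a rank-one matrix, the matrix-theoretic
# face of consensus. Graphs and weights below are illustrative.

def flocking_matrix(adj):
    """Row-stochastic matrix: each agent averages itself and its neighbors."""
    n = adj.shape[0]
    A = adj + np.eye(n)          # self-loops: every agent uses its own value
    return A / A.sum(axis=1, keepdims=True)

# Two neighbor graphs, each rooted at node 0 (adj[i, j] = 1: i hears j).
adj1 = np.array([[0, 0, 0],
                 [1, 0, 0],
                 [0, 1, 0]], dtype=float)   # information flows 0 -> 1 -> 2
adj2 = np.array([[0, 0, 0],
                 [1, 0, 1],
                 [1, 0, 0]], dtype=float)

F1, F2 = flocking_matrix(adj1), flocking_matrix(adj2)

P = np.eye(3)
for k in range(60):                  # alternate the two rooted graphs
    P = (F1 if k % 2 == 0 else F2) @ P

# Rows of the long product agree: every initial condition is driven
# to a common value.
spread = np.max(P, axis=0) - np.min(P, axis=0)
print(float(spread.max()))
```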


Journal ArticleDOI
TL;DR: In a sequential Bayesian ranking and selection problem with independent normal populations and common known variance, a previously introduced measurement policy, the knowledge-gradient policy, is studied and shown to be optimal both when the horizon is a single time period and in the limit as the horizon extends to infinity.
Abstract: In a sequential Bayesian ranking and selection problem with independent normal populations and common known variance, we study a previously introduced measurement policy which we refer to as the knowledge-gradient policy. This policy myopically maximizes the expected increment in the value of information in each time period, where the value is measured according to the terminal utility function. We show that the knowledge-gradient policy is optimal both when the horizon is a single time period and in the limit as the horizon extends to infinity. We show furthermore that, in some special cases, the knowledge-gradient policy is optimal regardless of the length of any given fixed total sampling horizon. We bound the knowledge-gradient policy's suboptimality in the remaining cases, and show through simulations that it performs competitively with or significantly better than other policies.

440 citations
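The knowledge-gradient quantity itself is easy to sketch. The following uses one common form from the knowledge-gradient literature for independent normal beliefs with known measurement noise; the numbers and the tie-free setup are illustrative assumptions, not the paper's.

```python
import math

# Hedged sketch of the knowledge-gradient (KG) quantity for independent
# normal beliefs with known measurement noise variance; ties and prior
# choices are simplified for illustration.

def kg_factor(mu, sigma2, noise2):
    """Expected one-step gain in the max posterior mean, per alternative."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    f = lambda z: z * Phi(z) + phi(z)
    kg = []
    for x in range(len(mu)):
        s_tilde = sigma2[x] / math.sqrt(sigma2[x] + noise2)  # std of mean update
        best_other = max(m for y, m in enumerate(mu) if y != x)
        z = -abs(mu[x] - best_other) / s_tilde
        kg.append(s_tilde * f(z))
    return kg

mu = [0.0, 0.3, 0.5]        # posterior means
sigma2 = [1.0, 1.0, 0.01]   # posterior variances
kg = kg_factor(mu, sigma2, noise2=1.0)
choice = max(range(3), key=lambda x: kg[x])
print(choice)
```

Note that the policy does not simply measure the alternative with the best mean: the nearly-known alternative 2 has almost no information value, so a close but uncertain competitor is measured instead.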


Journal ArticleDOI
TL;DR: Recently established properties of compositions of directed graphs together with results from the theory of nonhomogeneous Markov chains are used to derive worst case convergence rates for the headings of a group of mobile autonomous agents which arise in connection with the widely studied Vicsek consensus problem.
Abstract: This paper uses recently established properties of compositions of directed graphs together with results from the theory of nonhomogeneous Markov chains to derive worst case convergence rates for the headings of a group of mobile autonomous agents which arise in connection with the widely studied Vicsek consensus problem. The paper also uses graph-theoretic constructions to solve modified versions of the Vicsek problem in which there are measurement delays, asynchronous events, or a group leader. In all three cases the conditions under which consensus is achieved prove to be almost the same as the conditions under which consensus is achieved in the synchronous, delay-free, leaderless case.

332 citations
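A minimal heading-update simulation in the spirit of the leader-following variant (our toy model, not the paper's): each follower averages its heading with those of its current neighbors, the leader never updates, and keeping the leader in every neighborhood keeps each graph rooted.

```python
import numpy as np

# Toy Vicsek-style heading update with a group leader (agent 0), under
# randomly changing neighbor graphs. Angle wrap-around is ignored for
# simplicity; everything here is illustrative.

rng = np.random.default_rng(0)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)
leader_heading = theta[0]

for _ in range(200):
    adj = rng.random((n, n)) < 0.3   # a fresh random graph at each step
    adj[:, 0] = True                 # every agent hears the leader: rooted at 0
    new = theta.copy()
    for i in range(1, n):
        nbrs = np.flatnonzero(adj[i])
        new[i] = (theta[i] + theta[nbrs].sum()) / (1 + len(nbrs))
    theta = new

print(float(np.abs(theta - leader_heading).max()))  # followers reach the leader
```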


Journal ArticleDOI
TL;DR: In this article, a hierarchy of LMI-relaxations whose optimal values form a non-decreasing sequence of lower bounds on the optimal value of the OCP is provided.
Abstract: We consider the class of nonlinear optimal control problems (OCPs) with polynomial data, i.e., the differential equation, state and control constraints, and cost are all described by polynomials, and more generally OCPs with smooth data. In addition, initial and/or final state constraints are allowed. We provide a simple hierarchy of LMI (linear matrix inequality) relaxations whose optimal values form a nondecreasing sequence of lower bounds on the optimal value of the OCP. Under some convexity assumptions, the sequence converges to the optimal value of the OCP. Preliminary results show that good approximations are obtained with few moments.

289 citations


Journal ArticleDOI
TL;DR: This work gives a new sufficient condition on the boundary conditions for the exponential stability of one-dimensional nonlinear hyperbolic systems on a bounded interval using an explicit strict Lyapunov function.
Abstract: We give a new sufficient condition on the boundary conditions for the exponential stability of one-dimensional nonlinear hyperbolic systems on a bounded interval. Our proof relies on the construction of an explicit strict Lyapunov function. We compare our sufficient condition with other known sufficient conditions for nonlinear and linear one-dimensional hyperbolic systems.

273 citations


Journal ArticleDOI
TL;DR: Peng's BSDE method is extended from the framework of stochastic control theory to that of stochastic differential games, allowing a dynamic programming principle to be proved for both the upper and the lower value functions of the game in a straightforward way.
Abstract: In this paper we study zero-sum two-player stochastic differential games with the help of the theory of backward stochastic differential equations (BSDEs). More precisely, we generalize the results of the pioneering work of Fleming and Souganidis [Indiana Univ. Math. J., 38 (1989), pp. 293-314] by considering cost functionals defined by controlled BSDEs and by allowing the admissible control processes to depend on events occurring before the beginning of the game. This extension of the class of admissible control processes has the consequence that the cost functionals become random variables. However, by making use of a Girsanov transformation argument, which is new in this context, we prove that the upper and the lower value functions of the game remain deterministic. Apart from the fact that this extension of the class of admissible control processes is quite natural and reflects the behavior of the players who always use the maximum of available information, its combination with BSDE methods, in particular that of the notion of stochastic “backward semigroups" introduced by Peng [BSDE and stochastic optimizations, in Topics in Stochastic Analysis, Science Press, Beijing, 1997], allows us then to prove a dynamic programming principle for both the upper and the lower value functions of the game in a straightforward way. The upper and the lower value functions are then shown to be the unique viscosity solutions of the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations, respectively. For this Peng's BSDE method is extended from the framework of stochastic control theory into that of stochastic differential games.

268 citations


Journal ArticleDOI
TL;DR: Under classical conditions, strong convergence of the sequence of iterates generated by the considered scheme is proved.
Abstract: This paper deals with an iterative method, in a real Hilbert space, for approximating a common element of the set of fixed points of a demicontractive operator (possibly quasi-nonexpansive or strictly pseudocontractive) and the set of solutions of a variational inequality problem for a monotone, Lipschitz continuous mapping. The considered algorithm can be regarded as a combination of a variation of the hybrid steepest descent method and the so-called extragradient method. Under classical conditions, we prove the strong convergence of the sequences of iterates given by the considered scheme.

235 citations
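The extragradient building block is easy to illustrate. Below is Korpelevich's classical two-projection step on a toy monotone operator over a box; the paper's algorithm additionally blends in a hybrid steepest-descent part for the fixed-point set, which this sketch omits.

```python
import numpy as np

# Sketch of the extragradient step for a monotone, Lipschitz variational
# inequality VI(F, C): find x* in C with <F(x*), x - x*> >= 0 for all x in C.
# Operator, box, and step size below are illustrative.

def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

A = np.array([[0.1, 1.0], [-1.0, 0.1]])   # monotone: symmetric part is 0.1*I
z = np.array([0.3, -0.2])                 # F vanishes at z, so z solves the VI
F = lambda x: A @ (x - z)

x = np.array([1.0, 1.0])
lam = 0.5                                  # step below 1/(Lipschitz constant)
for _ in range(300):
    y = project_box(x - lam * F(x))        # prediction step
    x = project_box(x - lam * F(y))        # correction step

print(np.round(x, 4))
```

The rotation-dominated operator is exactly the case where plain projected "gradient" steps spiral outward, while the prediction/correction pair contracts to the solution.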


Journal ArticleDOI
TL;DR: The proof is based on the choice of suitable weighted functions and Hardy-type inequalities, and null controllability results are deduced for the degenerate one-dimensional heat equation with the same boundary conditions as above.
Abstract: Given $\alpha \in [0,2)$ and $f \in L^2 ((0,T)\times(0,1))$, we derive new Carleman estimates for the degenerate parabolic problem $w_t + (x^\alpha w_x) _x =f$, where $(t,x) \in (0,T) \times (0,1)$, associated to the boundary conditions $w(t,1)=0$ and $w(t,0)=0$ if $0 \leq \alpha <1$ or $(x^\alpha w_x)(t,0)=0$ if $1\leq \alpha <2$. The proof is based on the choice of suitable weighted functions and Hardy-type inequalities. As a consequence, for all $0 \leq \alpha <2$ and $\omega\subset\subset(0,1)$, we deduce null controllability results for the degenerate one-dimensional heat equation $u_t - (x^\alpha u_x)_x = h \chi _\omega$ with the same boundary conditions as above.

211 citations


Journal ArticleDOI
TL;DR: A priori error analysis is developed for Galerkin finite element discretizations of optimal control problems governed by linear parabolic equations, providing error estimates of optimal order with respect to both space and time discretization parameters.
Abstract: In this paper we develop a priori error analysis for Galerkin finite element discretizations of optimal control problems governed by linear parabolic equations. The space discretization of the state variable is done using usual conforming finite elements, whereas the time discretization is based on discontinuous Galerkin methods. For different types of control discretizations we provide error estimates of optimal order with respect to both space and time discretization parameters. The paper is divided into two parts. In the first part we develop some stability and error estimates for space-time discretization of the state equation and provide error estimates for optimal control problems without control constraints. In the second part of the paper, the techniques and results of the first part are used to develop a priori error analysis for optimal control problems with pointwise inequality constraints on the control variable.

207 citations


Journal ArticleDOI
TL;DR: In this article, the authors study Carnot-Caratheodory metrics induced by the Cartan decomposition, computing the cut loci globally and expressing the Carnot-Caratheodory distance as the inverse of an elementary function.
Abstract: In this paper we study the Carnot-Caratheodory metrics on $SU(2)\simeq S^3$, $SO(3)$, and $SL(2)$ induced by their Cartan decomposition and by the Killing form. Besides computing explicitly geodesics and conjugate loci, we compute the cut loci (globally), and we give the expression of the Carnot-Caratheodory distance as the inverse of an elementary function. We then prove that the metric given on $SU(2)$ projects on the so-called lens spaces $L(p,q)$. Also for lens spaces, we compute the cut loci (globally). For $SU(2)$ the cut locus is a maximal circle without one point. In all other cases the cut locus is a stratified set. To our knowledge, this is the first explicit computation of the whole cut locus in sub-Riemannian geometry, except for the trivial case of the Heisenberg group.

Journal ArticleDOI
TL;DR: A posteriori error estimates are developed for finite element discretization of elliptic optimization problems with pointwise inequality constraints on the control variable; the estimators guide an adaptive mesh refinement algorithm, allowing for substantial savings in degrees of freedom.
Abstract: In this paper we develop a posteriori error estimates for finite element discretization of elliptic optimization problems with pointwise inequality constraints on the control variable. We derive error estimators for assessing the discretization error with respect to the cost functional as well as with respect to a given quantity of interest. These error estimators provide quantitative information about the discretization error and guide an adaptive mesh refinement algorithm allowing for substantial saving in degrees of freedom. The behavior of the method is demonstrated on numerical examples.

Journal ArticleDOI
TL;DR: A max-plus analogue of the Petrov-Galerkin finite element method is introduced to solve finite horizon deterministic optimal control problems, and a convergence result is derived, in arbitrary dimension, showing that for a class of problems the error estimate is of order $\delta+\Delta x(\delta)^{-1}$ or $\sqrt{\delta}+\Delta x(\delta)^{-1}$, depending on the choice of the approximation.
Abstract: We introduce a max-plus analogue of the Petrov-Galerkin finite element method to solve finite horizon deterministic optimal control problems. The method relies on a max-plus variational formulation. We show that the error in the sup-norm can be bounded from the difference between the value function and its projections on max-plus and min-plus semimodules when the max-plus analogue of the stiffness matrix is exactly known. In general, the stiffness matrix must be approximated: this requires approximating the operation of the Lax-Oleinik semigroup on finite elements. We consider two approximations relying on the Hamiltonian. We derive a convergence result, in arbitrary dimension, showing that for a class of problems, the error estimate is of order $\delta+\Delta x(\delta)^{-1}$ or $\sqrt{\delta}+\Delta x(\delta)^{-1}$, depending on the choice of the approximation, where $\delta$ and $\Delta x$ are, respectively, the time and space discretization steps. We compare our method with another max-plus based discretization method previously introduced by Fleming and McEneaney. We give numerical examples in dimensions 1 and 2.
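The algebra underlying the method can be shown in a few lines: in the max-plus semiring, "addition" is max and "multiplication" is +, so applying a max-plus stiffness matrix to finite element coordinates is a max-plus matrix-vector product. A minimal sketch (ours, with illustrative numbers):

```python
import numpy as np

# Max-plus linear algebra in miniature: (A ⊗ v)_i = max_j (A[i, j] + v[j]).
# This operation is the building block behind applying a max-plus
# stiffness matrix; the matrices here are toy data.

def maxplus_matvec(A, v):
    """Max-plus matrix-vector product."""
    return np.max(A + v[None, :], axis=1)

NEG_INF = -np.inf                       # the max-plus "zero" element
I_mp = np.array([[0.0, NEG_INF],        # the max-plus identity matrix
                 [NEG_INF, 0.0]])

A = np.array([[0.0, -1.0],
              [-2.0, 0.5]])
v = np.array([1.0, 3.0])

print(maxplus_matvec(A, v))        # componentwise max(A_ij + v_j)
print(maxplus_matvec(I_mp, v))     # identity acts as expected
```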

Journal ArticleDOI
TL;DR: In this paper, the authors propose a variant of the Monge-Kantorovich problem, taking into account congestion, and prove the existence and the variational characterization of equilibria in a continuous space setting.
Abstract: In the classical Monge-Kantorovich problem, the transportation cost depends only on the amount of mass sent from sources to destinations and not on the paths followed by this mass. Thus, it does not allow for congestion effects. Using the notion of traffic intensity, we propose a variant, taking into account congestion. This variant is a continuous version of a well-known traffic problem on networks that is studied both in economics and in operational research. The interest of this problem is in its relations with traffic equilibria of Wardrop type. What we prove in the paper is exactly the existence and the variational characterization of equilibria in a continuous space setting.

Journal ArticleDOI
TL;DR: The coordinating control law stabilizes the unstable dynamics with a term derived from the method of controlled Lagrangians and synchronizes the dynamics across the network with potential shaping designed to couple the mechanical systems.
Abstract: In this paper we address stabilization of a network of underactuated mechanical systems with unstable dynamics. The coordinating control law stabilizes the unstable dynamics with a term derived from the method of controlled Lagrangians and synchronizes the dynamics across the network with potential shaping designed to couple the mechanical systems. The coupled system is Lagrangian with symmetry, and energy methods are used to prove stability and coordinated behavior. Two cases of asymptotic stabilization are discussed; one yields convergence to synchronized motion staying on a constant momentum surface, and the other yields convergence to a relative equilibrium. We illustrate the results in the case of synchronization of $n$ carts, each balancing an inverted pendulum.

Journal ArticleDOI
TL;DR: Rather than penalizing the material density once the optimal composite shape is obtained, the authors macroscopically project the microstructure of the composite through an appropriate procedure that roughly consists in laying the material along its directions of lamination.
Abstract: We propose an alternative to the classical post-treatment of the homogenization method for shape optimization. Rather than penalize the material density once the optimal composite shape is obtained (by the homogenization method) in order to produce a workable shape close to the optimal one, we macroscopically project the microstructure of the former through an appropriate procedure that roughly consists in laying the material along the directions of lamination of the composite. We have tested our approach in the framework of compliance minimization in two-dimensional elasticity. Numerical results are provided.

Journal ArticleDOI
TL;DR: This note extends the Brezis-Ekeland principle to doubly nonlinear evolution equations driven by convex potentials in order to establish approximation results for gradient flows, doubly nonlinear equations, and rate-independent evolutions.
Abstract: The celebrated Brezis-Ekeland principle [C. R. Acad. Sci. Paris Ser. A-B, 282 (1976), pp. Ai, A1197-A1198, Aii, and A971-A974] characterizes trajectories of nonautonomous gradient flows of convex functionals as solutions to suitable minimization problems. This note extends this characterization to doubly nonlinear evolution equations driven by convex potentials. The characterization is exploited in order to establish approximation results for gradient flows, doubly nonlinear equations, and rate-independent evolutions.

Journal ArticleDOI
TL;DR: A bang-bang principle is obtained for the time optimal control of the heat equation: each optimal control satisfies $u^*(\cdot, t)\in \partial U$ for almost all $t\in [0, T^*]$, where $\partial U$ denotes the boundary of the control set $U$ and $T^*$ is the optimal time.
Abstract: In this paper, we establish a certain $L^\infty$-null controllability for the internally controlled heat equation in $\Omega\times [0,T]$, with the control restricted to a product set of an open nonempty subset in $\Omega$ and a subset of positive measure in the interval $[0,T]$. Based on this, we obtain a bang-bang principle for the time optimal control of the heat equation with controls taken from the set $\mathcal{U}_{ad} =\{u(\cdot, t): [0, \infty){\rightarrow} L^2(\Omega)$ measurable; $u(\cdot, t)\in U,$ a.e. in $t\}$, where $U$ is a closed and bounded subset of $L^2(\Omega)$. Namely, each optimal control $u^*(\cdot, t)$ of the problem satisfies necessarily the bang-bang property: $u^*(\cdot, t)\in \partial U$ for almost all $t\in [0, T^*]$, where $\partial U$ denotes the boundary of the set $U$ and $T^*$ is the optimal time. We also get the uniqueness of the optimal control when the target set $S$ is convex and the control set $U$ is a closed ball.

Journal ArticleDOI
TL;DR: In this article, the authors present the second part of their a priori error analysis for finite element discretizations of parabolic optimal control problems.
Abstract: This paper is the second part of our work on a priori error analysis for finite element discretizations of parabolic optimal control problems. In the first part [SIAM J. Control Optim., 47 (2008), ...

Journal ArticleDOI
TL;DR: A stabilizing output-feedback controller is designed in a noncollocated setting, with measurements at the free end (tip) of the beam and actuation at the beam base, using a novel combination of the classical “damping boundary feedback” idea and a recently developed backstepping approach.
Abstract: We consider a model of the undamped shear beam with a destabilizing boundary condition. The motivation for this model comes from atomic force microscopy, where the tip of the cantilever beam is destabilized by van der Waals forces acting between the tip and the material surface. Previous research efforts relied on collocated actuation and sensing at the tip, exploiting the passivity property between the corresponding input and output in the beam model. In this paper we design a stabilizing output-feedback controller in a noncollocated setting, with measurements at the free end (tip) of the beam and actuation at the beam base. Our control design is a novel combination of the classical “damping boundary feedback” idea with a recently developed backstepping approach. A change of variables is constructed which converts the beam model into a wave equation (for a very short string) with boundary damping. This approach is physically intuitive and allows both an elegant stability analysis and an easy selection of design parameters for achieving desired performance. Our observer design is a dual of the similar ideas, combining the damping feedback with backstepping, adapted to the observer error system. Both stability and well-posedness of the closed-loop system are proved. The simulation results are presented.

Journal ArticleDOI
TL;DR: An explicit parametrization of a finite-dimensional subset of the cone of Lyapunov functions is given, enforced using sum-of-squares polynomial matrices, which allows the computation to be formulated as a semidefinite program.
Abstract: We consider the problem of constructing Lyapunov functions for linear differential equations with delays. For such systems it is known that exponential stability implies the existence of a positive Lyapunov function which is quadratic on the space of continuous functions. We give an explicit parameterization of a sequence of finite-dimensional subsets of the cone of positive Lyapunov functions using positive semidefinite matrices. This allows stability analysis of linear time-delay systems to be formulated as a semidefinite program.

Journal ArticleDOI
TL;DR: From the tangential condition characterizing capture basins, it is proved that this solution is the unique “upper semicontinuous” solution to the Hamilton-Jacobi-Bellman partial differential equation in the Barron-Jensen/Frankowska sense.
Abstract: We use viability techniques for solving Dirichlet problems with inequality constraints (obstacles) for a class of Hamilton-Jacobi equations. The hypograph of the “solution” is defined as the “capture basin” under an auxiliary control system of a target associated with the initial and boundary conditions, viable in an environment associated with the inequality constraint. From the tangential condition characterizing capture basins, we prove that this solution is the unique “upper semicontinuous” solution to the Hamilton-Jacobi-Bellman partial differential equation in the Barron-Jensen/Frankowska sense. We show how this framework allows us to translate properties of capture basins into corresponding properties of the solutions to this problem. For instance, this approach provides a representation formula of the solution which boils down to the Lax-Hopf formula in the absence of constraints.
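Without constraints the representation reduces to the Lax-Hopf formula, which is simple to evaluate on a grid. The sketch below (our toy example, not from the paper) treats $u_t + u_x^2/2 = 0$ with $u(0,x)=|x|$, for which the exact Hopf-Lax solution is known in closed form.

```python
import numpy as np

# Hopf-Lax (Lax-Hopf) formula on a grid for u_t + H(u_x) = 0 with
# H(p) = p^2/2, whose Legendre transform is L(q) = q^2/2:
#   u(t, x) = min_y [ u0(y) + (x - y)^2 / (2 t) ].

def hopf_lax(u0, xs, t):
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    return np.min(u0(Y) + (X - Y) ** 2 / (2 * t), axis=1)

xs = np.linspace(-3, 3, 1201)
t = 1.0
u = hopf_lax(np.abs, xs, t)

# exact viscosity solution for u0(x) = |x|
exact = np.where(np.abs(xs) >= t, np.abs(xs) - t / 2, xs ** 2 / (2 * t))
print(float(np.abs(u - exact).max()))   # grid minimization matches closely
```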

Journal ArticleDOI
TL;DR: In this article, the authors establish equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) and an embedded discrete-time Markov chain generated by a Markov kernel.
Abstract: The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) $\{X(t)\}$ and an embedded discrete-time Markov chain $\{\Theta_{n}\}$ generated by a Markov kernel $G$ that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing $\{\Theta_{n}\}$ as a sampling of the PDMP $\{X(t)\}$ and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by $G$ and the resolvent kernel $R$ of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of $\sigma$-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between $\{X(t)\}$ and $\{\Theta_{n}\}$, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.

Journal ArticleDOI
TL;DR: The upper value and the lower value of the game are defined by $V^*(x) = \inf_\sigma \sup_\tau \mathsf{M}_x(\tau,\sigma)$ and $V_*(x) = \sup_\tau \inf_\sigma \mathsf{M}_x(\tau,\sigma)$, respectively, where the horizon $T$ (the upper bound for $\tau$ and $\sigma$) may be either finite or infinite.
Abstract: Let $X=(X_t)_{t \ge 0}$ be a strong Markov process, and let $G_1,\, G_2$, and $G_3$ be continuous functions satisfying $G_1 \le G_3 \le G_2$ and $\mathsf{E}_x\sup_t \vert G_i(X_t) \vert < \infty$ for $i=1,2,3$. Consider the optimal stopping game where the sup-player chooses a stopping time $\tau$ to maximize, and the inf-player chooses a stopping time $\sigma$ to minimize, the expected payoff $\mathsf{M}_x(\tau,\sigma) = \mathsf{E}_x [G_1(X_\tau)\;\! I(\tau\! <\! \sigma) + G_2(X_\sigma)\;\! I(\sigma\! <\! \tau) + G_3(X_\tau)\;\! I(\tau\! =\! \sigma)],$ where $X_0=x$ under $\mathsf{P}_{\!x}$. Define the upper value and the lower value of the game by $V^*(x) = \inf_\sigma \sup_\tau \mathsf{M}_x(\tau,\sigma)~{\rm and}~ V_*(x) = \sup_\tau \inf_\sigma \mathsf{M}_x(\tau,\sigma),$ respectively, where the horizon $T$ (the upper bound for $\tau$ and $\sigma$ above) may be either finite or infinite (it is assumed that $G_1(X_T)=G_2(X_T)$ if $T$ is finite and $\liminf_{t \rightarrow \infty} G_2(X_t) \le \limsup_{t \rightarrow \infty} G_1(X_t)$ if $T$ is infinite). If $X$ is right-continuous, then the Stackelberg equilibrium holds, in the sense that $V^*(x)=V_*(x)$ for all $x$ with $V:=V^*=V_*$ defining a measurable function. If $X$ is right-continuous and left-continuous over stopping times (quasi-left-continuous), then the Nash equilibrium holds, in the sense that there exist stopping times $\tau_*$ and $\sigma_*$ such that $\mathsf{M}_x(\tau,\sigma_*) \le \mathsf{M}_x(\tau_*,\sigma_*) \le \mathsf{M}_x(\tau_*,\sigma)$ for all stopping times $\tau$ and $\sigma$, implying also that $V(x)=\mathsf{M}_x(\tau_*,\sigma_*)$ for all $x$. Further properties of the value function $V$ and the optimal stopping times $\tau_*$ and $\sigma_*$ are exhibited in the proof.
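A discrete toy analogue of this optimal stopping game can be solved by backward induction. In the sketch below (ours, with payoffs chosen so that $G_1 = G_2$ wherever either player stops, making the tie payoff irrelevant), the value recursion is $V_t = \min(G_2, \max(G_1, \mathsf{E}[V_{t+1}]))$ on a simple random walk.

```python
import numpy as np

# Toy finite-horizon Dynkin game on a +/-1 random walk over states -2..2.
# The sup-player stops to collect G1 (wants to reach +2); the inf-player
# stops to pay G2 (wants to reach -2). The clip below encodes
# min(G2, max(G1, E[next])). All payoffs are illustrative.

xs = np.arange(-2, 3)                    # states -2 .. 2
G1 = (xs >= 2).astype(float)             # sup-player's stopping payoff
G2 = np.where(xs <= -2, 0.0, 1.0)        # inf-player's stopping payoff

T = 200
V = G1.copy()                            # terminal payoff (G1 = G2 at |x| = 2)
for _ in range(T):
    mean_next = V.copy()
    mean_next[1:-1] = 0.5 * (V[:-2] + V[2:])   # E[V(t+1, x +/- 1)]
    V = np.minimum(G2, np.maximum(G1, mean_next))

print(V)   # converges to the harmonic values (x + 2) / 4
```

The limit is the gambler's-ruin probability of hitting $+2$ before $-2$, which is where both players' stopping incentives balance.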

Journal ArticleDOI
TL;DR: In this article, the authors consider constrained finite-time optimal control problems for discrete-time linear time-invariant systems with constraints on inputs and outputs based on linear and quadratic performance indices.
Abstract: We consider constrained finite-time optimal control problems for discrete-time linear time-invariant systems with constraints on inputs and outputs based on linear and quadratic performance indices. The solution to such problems is a time-varying piecewise affine (PWA) state-feedback law and can be computed by means of multiparametric programming. By exploiting the properties of the value function and the piecewise affine optimal control law of the constrained finite-time optimal control (CFTOC), we propose two new algorithms that avoid storing the polyhedral regions. The new algorithms significantly reduce the on-line storage demands and computational complexity during evaluation of the PWA feedback control law resulting from the CFTOC.
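The object whose evaluation is being optimized is worth seeing concretely: an explicit PWA law is a list of polyhedra with affine gains, and the naive online step is a region lookup. The regions below are a hypothetical 1D saturation-style partition, not taken from the paper; the paper's contribution is precisely to avoid storing such regions.

```python
import numpy as np

# Baseline evaluation of an explicit PWA state-feedback law: find the
# polyhedral region {x : A x <= b} containing x, then apply u = F x + g.
# This 1D partition (a saturated law u = clip(-x, -1, 1)) is hypothetical.

regions = [  # each entry: (A, b, F, g)
    (np.array([[1.0]]), np.array([-1.0]),
     np.array([[0.0]]), np.array([1.0])),            # x <= -1: saturate at +1
    (np.array([[-1.0], [1.0]]), np.array([1.0, 1.0]),
     np.array([[-1.0]]), np.array([0.0])),           # -1 <= x <= 1: u = -x
    (np.array([[-1.0]]), np.array([-1.0]),
     np.array([[0.0]]), np.array([-1.0])),           # x >= 1: saturate at -1
]

def pwa_control(x):
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):
            return F @ x + g
    raise ValueError("x outside the explicit solution's domain")

print(pwa_control(np.array([0.4])))   # interior region: u = -x
print(pwa_control(np.array([3.0])))   # input saturates
```

With many states and constraints the region count explodes, which is why avoiding region storage matters.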

Journal ArticleDOI
TL;DR: Piecewise Lyapunov-Razumikhin functions are introduced for the switching candidate systems to investigate the stability in the presence of an infinite number of switchings, providing sufficient conditions in terms of the minimum dwell time to guarantee asymptotic stability.
Abstract: This paper addresses the asymptotic stability of switched time delay systems with heterogeneous time invariant time delays. Piecewise Lyapunov-Razumikhin functions are introduced for the switching candidate systems to investigate the stability in the presence of an infinite number of switchings. We provide sufficient conditions in terms of the minimum dwell time to guarantee asymptotic stability under the assumptions that each switching candidate is delay-independently or delay-dependently stable. Conservatism analysis is also provided by comparing with the dwell time conditions for switched delay-free systems. Finally, a numerical example is given to validate the results.
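The dwell-time mechanism can be sketched on a delay-free example (the delay-free case is what the paper's conservatism analysis compares against): each mode below is stable but exhibits transient growth, and a long enough dwell time lets the exponential decay absorb the transient. The matrices and dwell time are illustrative.

```python
import numpy as np

# Dwell-time sketch for a delay-free switched linear system: both modes
# are Hurwitz but have transient norm growth up to sqrt(10); holding each
# mode for `dwell` seconds makes every segment a strict contraction.

def expm_2x2(A, t):
    """exp(A t) for A = a*I + B with B @ B = -w^2 * I (our matrices below)."""
    a = A[0, 0]
    B = A - a * np.eye(2)
    w = np.sqrt(max(-(B @ B)[0, 0], 1e-300))
    return np.exp(a * t) * (np.cos(w * t) * np.eye(2) + np.sin(w * t) / w * B)

A1 = np.array([[-0.1, 1.0], [-10.0, -0.1]])
A2 = np.array([[-0.1, 10.0], [-1.0, -0.1]])

dwell = 20.0                     # sqrt(10) * exp(-0.1 * 20) ≈ 0.43 < 1
x = np.array([1.0, 0.0])
for k in range(10):              # alternate modes, holding each for `dwell`
    x = expm_2x2(A1 if k % 2 == 0 else A2, dwell) @ x

print(float(np.linalg.norm(x)))  # decays despite the switching
```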

Journal ArticleDOI
TL;DR: In this paper, a non-smooth mathematical programming technique is used to compute locally optimal $H_2/H_\infty$-controllers, which may have a predefined structure.
Abstract: We present a new approach to mixed $H_2/H_\infty$ output feedback control synthesis. Our method uses nonsmooth mathematical programming techniques to compute locally optimal $H_2/H_\infty$-controllers, which may have a predefined structure. We prove global convergence of our method and present tests to validate it numerically.

Journal ArticleDOI
TL;DR: Given a control-affine system and a controlled invariant submanifold, necessary and sufficient conditions are presented for local feedback equivalence to a system whose dynamics transversal to the submanifold are linear and controllable.
Abstract: Given a control-affine system and a controlled invariant submanifold, we present necessary and sufficient conditions for local feedback equivalence to a system whose dynamics transversal to the submanifold are linear and controllable. A key ingredient used in the analysis is the new notion of transverse controllability indices of a control system with respect to a set.

Journal ArticleDOI
TL;DR: A representation of solutions with the aid of a discrete matrix delayed exponential is used, leading to new conditions of controllability; besides a criterion of relative controllability, a control function is constructed as well.
Abstract: The purpose of this contribution is to develop a controllability method for linear discrete systems with constant coefficients and with pure delay. To do this, a representation of solutions with the aid of a discrete matrix delayed exponential is used. Such an approach leads to new conditions of controllability. Besides a criterion of relative controllability, a control function is constructed as well.

Journal ArticleDOI
TL;DR: In this article, the authors consider a stochastic control problem which is a natural extension of the Monge-Kantorovich problem and provide a probabilistic proof of two fundamental results in mass transportation: the Kantorovich duality and the graph property.
Abstract: We address an optimal mass transportation problem by means of optimal stochastic control. We consider a stochastic control problem which is a natural extension of the Monge-Kantorovich problem. Using a vanishing viscosity argument we provide a probabilistic proof of two fundamental results in mass transportation: the Kantorovich duality and the graph property for the support of an optimal measure for the Monge-Kantorovich problem. Our key tool is a stochastic duality result involving solutions of the Hamilton-Jacobi-Bellman PDE.
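In the discrete setting, Kantorovich duality can be checked directly with linear programming (our illustration; the paper proves the continuous case by stochastic control): the optimal transport cost equals the optimal value of the dual problem over potentials $\varphi, \psi$ with $\varphi_i + \psi_j \le c_{ij}$.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete Kantorovich duality sanity check on toy data: solve the primal
# transport LP and its dual, and compare optimal values.

mu = np.array([0.5, 0.3, 0.2])         # source masses
nu = np.array([0.4, 0.6])              # target masses
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0],
                 [5.0, 2.0]])          # cost[i, j]: moving mass from i to j
n, m = cost.shape

# Primal: minimize sum c_ij p_ij over couplings p with marginals mu, nu.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums equal mu
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # column sums equal nu
primal = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                 bounds=(0, None), method="highs")

# Dual: maximize mu.phi + nu.psi subject to phi_i + psi_j <= c_ij.
A_ub = np.zeros((n * m, n + m))
for i in range(n):
    for j in range(m):
        A_ub[i * m + j, i] = 1.0
        A_ub[i * m + j, n + j] = 1.0
dual = linprog(-np.concatenate([mu, nu]), A_ub=A_ub, b_ub=cost.ravel(),
               bounds=(None, None), method="highs")

print(primal.fun, -dual.fun)           # the two optimal values coincide
```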