
Showing papers in "Siam Journal on Control and Optimization in 2005"


Journal ArticleDOI
TL;DR: The robustness and excellent real-time performance of the method is demonstrated in a numerical experiment, the control of an unstable system, namely, an airborne kite that shall fly loops.
Abstract: An efficient Newton-type scheme for the approximate on-line solution of optimization problems as they occur in optimal feedback control is presented. The scheme allows a fast reaction to disturbances by delivering approximations of the exact optimal feedback control which are iteratively refined during the runtime of the controlled process. The contractivity of this real-time iteration scheme is proven, and a bound on the loss of optimality---compared with the theoretical optimal solution---is given. The robustness and excellent real-time performance of the method is demonstrated in a numerical experiment, the control of an unstable system, namely, an airborne kite that shall fly loops.
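The core idea of the real-time iteration scheme can be sketched on a toy problem: rather than solving each optimal control problem to convergence, a single Newton-type step is taken per sampling instant, warm-started from the previous control value. The scalar plant and cost below are illustrative stand-ins, not the paper's kite model.

```python
# Real-time iteration sketch on an illustrative scalar plant (not the
# paper's kite model): one Newton-type step of the optimization problem
# per sampling instant, warm-started from the previous control value.
a, b, r = 1.2, 1.0, 0.1            # unstable plant x+ = a*x + b*u, control weight r

def grad(u, x):                    # dJ/du for J(u) = (a*x + b*u)**2 + r*u**2
    return 2.0 * b * (a * x + b * u) + 2.0 * r * u

def hess(u, x):                    # d2J/du2 (constant for this quadratic cost)
    return 2.0 * b * b + 2.0 * r

x, u = 5.0, 0.0                    # initial state, warm-started control
for _ in range(30):
    u -= grad(u, x) / hess(u, x)   # single Newton step: the real-time iteration
    x = a * x + b * u              # apply the control while the plant evolves

print(abs(x))                      # state driven close to the origin
```

Because the toy cost is quadratic, each Newton step solves its subproblem exactly; the point of the sketch is the interleaving of optimization steps with the evolving plant.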

534 citations


Journal ArticleDOI
TL;DR: In this paper, a collection of distributed control laws related to nonsmooth gradient systems for disk-covering and sphere-packing problems is presented. The resulting dynamical systems promise to be of use in coordination problems for networked robots; in this setting the distributed control laws correspond to local interactions between the robots.
Abstract: This paper discusses dynamical systems for disk-covering and sphere-packing problems. We present facility location functions from geometric optimization and characterize their differentiable properties. We design and analyze a collection of distributed control laws that are related to nonsmooth gradient systems. The resulting dynamical systems promise to be of use in coordination problems for networked robots; in this setting the distributed control laws correspond to local interactions between the robots. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and the dynamical system approach to algorithms.

452 citations


Journal ArticleDOI
TL;DR: The dynamical systems approach to stochastic approximation is generalized to the case where the mean differential equation is replaced by a differential inclusion, and the limit set theorem is extended to this situation.
Abstract: The dynamical systems approach to stochastic approximation is generalized to the case where the mean differential equation is replaced by a differential inclusion. The limit set theorem of Benaim and Hirsch is extended to this situation. Internally chain transitive sets and attractors are studied in detail for set-valued dynamical systems. Applications to game theory are given, in particular to Blackwell's approachability theorem and the convergence of fictitious play.
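As a concrete instance of the game-theoretic applications mentioned above, discrete fictitious play in a zero-sum game, where convergence of the empirical frequencies is classical, can be simulated in a few lines; the game (matching pennies) and horizon below are illustrative choices.

```python
import numpy as np

# Fictitious play in matching pennies: each player best-responds to the
# opponent's empirical action frequencies. In zero-sum games the
# frequencies converge to a Nash equilibrium, here (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row payoffs; column gets -A

c1, c2 = np.ones(2), np.ones(2)            # action counts (uniform start)
for _ in range(20000):
    a1 = int(np.argmax(A @ (c2 / c2.sum())))     # row best response
    a2 = int(np.argmax(-(c1 / c1.sum()) @ A))    # column best response
    c1[a1] += 1.0
    c2[a2] += 1.0

freq1, freq2 = c1 / c1.sum(), c2 / c2.sum()
print(freq1, freq2)   # both near (0.5, 0.5)
```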

299 citations


Journal ArticleDOI
TL;DR: This work defines a linear port controlled Hamiltonian system associated with the previously defined Dirac structure and a symmetric positive operator defining the energy of the system.
Abstract: Associated with a skew-symmetric linear operator on the spatial domain $[a,b]$ we define a Dirac structure which includes the port variables on the boundary of this spatial domain. This Dirac structure is a subspace of a Hilbert space. Naturally, associated with this Dirac structure is an infinite-dimensional system. We parameterize the boundary port variables for which the \( C_{0} \)-semigroup associated with this system is contractive or unitary. Furthermore, this parameterization is used to split the boundary port variables into inputs and outputs. Similarly, we define a linear port controlled Hamiltonian system associated with the previously defined Dirac structure and a symmetric positive operator defining the energy of the system. We illustrate this theory on the example of the Timoshenko beam.

252 citations


Journal ArticleDOI
TL;DR: A "deterministic" stability result relying on simple conditions on the sequences {ξn} and {γn} is proved, and an algorithm based on projections on adaptive truncation sets is proposed, which ensures that the aforementioned conditions required for stability are satisfied.
Abstract: In this paper we address the problem of the stability and convergence of the stochastic approximation procedure \[ \theta_{n+1} = \theta_n + \gamma_{n+1} [h(\theta_n)+\xi_{n+1}]. \] The stability of such sequences $\{\theta_n\}$ is known to heavily rely on the behavior of the mean field $h$ at the boundary of the parameter set and the magnitude of the stepsizes used. The conditions typically required to ensure convergence, and in particular the boundedness or stability of $\{ \theta_n \}$, are either too difficult to check in practice or not satisfied at all. This is the case even for very simple models. The most popular technique for circumventing the stability problem consists of constraining $\{ \theta_n \}$ to a compact subset ${\mathcal{K}}$ in the parameter space. This is obviously not a satisfactory solution, as the choice of ${\mathcal{K}}$ is a delicate one. In this paper we first prove a ``deterministic'' stability result, which relies on simple conditions on the sequences $\{ \xi_n \}$ and $ \{ \gamma_n \}$. We then propose and analyze an algorithm based on projections on adaptive truncation sets, which ensures that the aforementioned conditions required for stability are satisfied. We focus in particular on the case where $\{ \xi_n \}$ is a so-called Markov state-dependent noise. We establish both the stability and convergence with probability 1 (w. p. 1) of the algorithm under a set of simple and verifiable assumptions. We illustrate our results with an example related to adaptive Markov chain Monte Carlo algorithms.
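The truncation idea can be illustrated on a one-dimensional recursion: run the stochastic approximation update, and whenever the iterate leaves the current truncation set K_j, restart it and enlarge the set. The mean field h, the noise, and the radii below are illustrative choices, not those of the paper.

```python
import numpy as np

# Stochastic approximation with expanding truncation sets K_j = [-M_j, M_j]
# (hedged sketch of the adaptive-truncation idea; h, the i.i.d. noise, and
# the radii M_j = 3**j are illustrative).
rng = np.random.default_rng(1)
theta_star = 2.0
h = lambda t: -(t - theta_star)        # mean field with root theta_star

theta, j, M = 0.0, 0, 1.0              # iterate, truncation index, radius
for n in range(1, 5001):
    gamma = 1.0 / n
    cand = theta + gamma * (h(theta) + rng.normal())
    if abs(cand) > M:                  # iterate left K_j: enlarge set, restart
        j += 1
        M = 3.0 ** j
        cand = 0.0
    theta = cand

print(theta)   # settles near theta_star = 2.0
```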

229 citations


Journal ArticleDOI
TL;DR: This work considers the behavior of value-based learning agents in the multi-agent multi-armed bandit problem, and shows that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached.
Abstract: The single-agent multi-armed bandit problem can be solved by an agent that learns the values of each action using reinforcement learning. However, the multi-agent version of the problem, the iterated normal form game, presents a more complex challenge, since the rewards available to each agent depend on the strategies of the others. We consider the behavior of value-based learning agents in this situation, and show that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached. We introduce a particular value-based learning algorithm, which we call individual Q-learning, and use stochastic approximation to study the asymptotic behavior, showing that strategies will converge to Nash distribution almost surely in 2-player zero-sum games and 2-player partnership games. Player-dependent learning rates are then considered, and it is shown that this extension converges in some games for which many algorithms, including the basic algorithm initially considered, fail to converge.
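A simplified variant of value-based learning with smoothed (Boltzmann) best responses can be sketched for matching pennies, a 2-player zero-sum game covered by the convergence result. The update rule, temperature, and step sizes below are illustrative and not the paper's exact individual Q-learning algorithm.

```python
import numpy as np

# Value-based learning with smooth best responses in matching pennies:
# each player keeps Q-values for its own actions only, plays a Boltzmann
# (softmax) strategy, and updates just the action it played. (Simplified
# illustrative variant, not the paper's exact algorithm.)
rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row payoffs; column gets -A
tau = 1.0                                   # smoothing temperature

def softmax(q):
    e = np.exp(q / tau)
    return e / e.sum()

Q1, Q2 = np.zeros(2), np.zeros(2)
for n in range(1, 50001):
    p1, p2 = softmax(Q1), softmax(Q2)
    a1 = rng.choice(2, p=p1)
    a2 = rng.choice(2, p=p2)
    r = A[a1, a2]                           # row reward; column receives -r
    step = 10.0 / (100.0 + n)               # decreasing learning rate
    Q1[a1] += step * (r - Q1[a1])
    Q2[a2] += step * (-r - Q2[a2])

print(softmax(Q1), softmax(Q2))   # near the Nash distribution (1/2, 1/2)
```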

169 citations


Journal ArticleDOI
TL;DR: It is shown that SS is equivalent to the spectrum of an augmented matrix lying in the open left half plane, to the existence of a solution for a certain Lyapunov equation, and implies (is equivalent for $\mathcal{S}$ finite) asymptotic wide sense stationarity (AWSS).
Abstract: Necessary and sufficient conditions for stochastic stability (SS) and mean square stability (MSS) of continuous-time linear systems subject to Markovian jumps in the parameters and additive disturbances are established. We consider two scenarios regarding the additive disturbances: one in which the system is driven by a Wiener process, and one characterized by functions in ${L_2^m}{(\Omega,{\cal F}, {\mathbb{P}})}$, which is the usual scenario for the $H_{\infty}$ approach. The Markov process is assumed to take values in an infinite countable set $\mathcal{S}$. It is shown that SS is equivalent to the spectrum of an augmented matrix lying in the open left half plane, to the existence of a solution for a certain Lyapunov equation, and implies (is equivalent for $\mathcal{S}$ finite) asymptotic wide sense stationarity (AWSS). It is also shown that SS is equivalent to the state $x(t)$ belonging to ${L_2^n}{(\Omega,{\cal F}, {\mathbb{P}})}$ whenever the disturbances are in ${L_2^m}{(\Omega,{\cal F}, {\mathbb{P}})}$. For the case in which $\mathcal{S}$ is finite, SS and MSS are equivalent, and the Lyapunov equation can be written down in two equivalent forms with each one providing an easier-to-check sufficient condition.
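For a scalar two-mode example the spectral test can be made concrete: the second moments q_i(t) = E[x(t)^2, r(t) = i] satisfy a linear ODE whose system matrix is the augmented matrix in question, and stability holds iff that matrix is Hurwitz. The numbers below are illustrative, not from the paper.

```python
import numpy as np

# Spectral stability test for a scalar two-mode Markov jump linear system
# dx = a_{r(t)} x dt (hedged sketch with illustrative numbers). The second
# moments q_i = E[x^2, r = i] satisfy dq/dt = (diag(2 a_i) + Lambda^T) q,
# so mean square stability holds iff this augmented matrix is Hurwitz,
# i.e., its spectrum lies in the open left half plane.
a = np.array([0.5, -2.0])              # mode 1 is unstable, mode 2 stable
Lam = np.array([[-5.0,  5.0],          # generator of the Markov chain
                [ 5.0, -5.0]])
M = np.diag(2.0 * a) + Lam.T
eigs = np.linalg.eigvals(M)
print(eigs.real.max())                 # negative: fast switching averages
                                       # out the unstable mode, so MSS holds
```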

153 citations


Journal ArticleDOI
TL;DR: A hybrid control system and general optimal control problems are considered and necessary conditions for an optimal hybrid trajectory are provided, stating a Hybrid Necessary Principle (HNP).
Abstract: We consider a hybrid control system and general optimal control problems for this system. We suppose that the switching strategy imposes restrictions on control sets and we provide necessary conditions for an optimal hybrid trajectory, stating a hybrid necessary principle (HNP). Our result generalizes various necessary principles available in the literature.

127 citations


Journal ArticleDOI
TL;DR: Standard system-theoretic properties are developed for a class of multidimensional linear systems with evolution along a free semigroup and the connections with the much earlier studied theory of rational and recognizable formal power series are drawn.
Abstract: We introduce a class of multidimensional linear systems with evolution along a free semigroup. The transfer function for such a system is a formal power series in noncommuting indeterminates. Standard system-theoretic properties (the operations of cascade/parallel connection and inversion, controllability, observability, Kalman decomposition, state-space similarity theorem, minimal state-space realizations, Hankel operators, realization theory) are developed for this class of systems. We also draw out the connections with the much earlier studied theory of rational and recognizable formal power series. Applications include linear-fractional models for classical discrete-time systems with structured, time-varying uncertainty, dimensionless formulas in robust control, multiscale systems and automata theory, and the theory of formal languages.

117 citations


Journal ArticleDOI
TL;DR: It is demonstrated that robust stability is equivalent to the existence of a smooth Lyapunov function and that, in fact, a continuous Lyapunov function implies robust stability, and a sufficient condition for robust stability that is independent of a Lyapunov function is presented.
Abstract: We consider stability with respect to two measures of a difference inclusion, i.e., of a discrete-time dynamical system with the push-forward map being set-valued. We demonstrate that robust stability is equivalent to the existence of a smooth Lyapunov function and that, in fact, a continuous Lyapunov function implies robust stability. We also present a sufficient condition for robust stability that is independent of a Lyapunov function. Toward this end, we develop several new results on the behavior of solutions of difference inclusions. In addition, we provide a novel result for generating a smooth function from one that is merely upper semicontinuous.

113 citations


Journal ArticleDOI
TL;DR: It is proved that an $(s, S)$ policy is optimal in a continuous-review stochastic inventory model with a fixed ordering cost when the demand is a mixture of a diffusion process and a compound Poisson process with exponentially distributed jump sizes.
Abstract: We prove that an $(s, S)$ policy is optimal in a continuous-review stochastic inventory model with a fixed ordering cost when the demand is a mixture of (i) a diffusion process and a compound Poisson process with exponentially distributed jump sizes, and (ii) a constant demand and a compound Poisson process. The proof uses the theory of impulse control. The Bellman equation of dynamic programming for such a problem reduces to a set of quasi-variational inequalities (QVI). An analytical study of the QVI leads to showing the existence of an optimal policy as well as the optimality of an $(s, S)$ policy. Finally, the combination of a diffusion and a general compound Poisson demand is not completely solved. We explain the difficulties and what remains open. We also provide a numerical example for the general case.
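A discretized simulation makes the policy concrete for demand case (i), a diffusion plus a compound Poisson process with exponentially distributed jumps: whenever inventory falls to the reorder point s, an order brings it back up to S. All parameters below are illustrative.

```python
import numpy as np

# Continuous-review (s, S) policy under mixed demand: diffusion increments
# plus compound Poisson jumps with exponential sizes (hedged simulation
# sketch; all numbers are illustrative, not from the paper).
rng = np.random.default_rng(2)
s, S = 2.0, 10.0                 # reorder point and order-up-to level
mu, sigma = 1.0, 0.5             # drift / volatility of the diffusion demand
lam, jump_mean = 0.3, 2.0        # Poisson rate and mean jump size
dt, T = 0.01, 50.0

x, orders = S, 0
path = []
for _ in range(int(T / dt)):
    d = mu * dt + sigma * np.sqrt(dt) * rng.normal()   # diffusion demand
    if rng.random() < lam * dt:                         # demand jump
        d += rng.exponential(jump_mean)
    x -= d
    if x <= s:                   # (s, S) rule: order up to S immediately
        x = S
        orders += 1
    path.append(x)

print(orders)                    # several replenishments over the horizon
```

Because the review is continuous (an order is triggered within the same step that inventory reaches s), the recorded inventory never sits at or below the reorder point.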

Journal ArticleDOI
TL;DR: This paper is devoted to the study of a stochastic linear-quadratic optimal control problem where the control variable is constrained in a cone, and all the coefficients of the problem are random processes.
Abstract: This paper is devoted to the study of a stochastic linear-quadratic (LQ) optimal control problem where the control variable is constrained in a cone, and all the coefficients of the problem are random processes. Employing Tanaka's formula, optimal control and optimal cost are explicitly obtained via solutions to two extended stochastic Riccati equations (ESREs). The ESREs, introduced for the first time in this paper, are highly nonlinear backward stochastic differential equations (BSDEs), whose solvability is proved based on a truncation function technique and Kobylanski's results. The general results obtained are then applied to a mean-variance portfolio selection problem for a financial market with random appreciation and volatility rates, and with short-selling prohibited. Feasibility of the problem is characterized, and efficient portfolios and efficient frontier are presented in closed forms.

Journal ArticleDOI
TL;DR: This paper establishes that an LCS satisfying the P-property has no strongly Zeno states and extends the analysis for such an LCS to a broader class of problems and provides sufficient conditions for a given state to be weakly non-Zeno.
Abstract: A linear complementarity system (LCS) is a hybrid dynamical system defined by a linear time-invariant ordinary differential equation coupled with a finite-dimensional linear complementarity problem (LCP). The present paper is the first of several papers whose goal is to study some fundamental issues associated with an LCS. Specifically, this paper addresses the issue of Zeno states and the related issue of finite number of mode switches in such a system. The cornerstone of our study is an expansion of a solution trajectory to the LCS near a given state in terms of an observability degree of the state. On the basis of this expansion and an inductive argument, we establish that an LCS satisfying the P-property has no strongly Zeno states. We next extend the analysis for such an LCS to a broader class of problems and provide sufficient conditions for a given state to be weakly non-Zeno. While related mode-switch results have been proved by Brunovsky and Sussmann for more general hybrid systems, our analysis exploits the special structure of the LCS and yields new results for the latter that are of independent interest and complement those by these two and other authors.

Journal ArticleDOI
TL;DR: It is proved that there is a sequence of generalized eigenfunctions that forms a Riesz basis in the state Hilbert space, and hence the spectrum determined growth condition holds and exponential stability of the closed-loop system can be deduced from the eigenvalue expressions.
Abstract: We study the boundary stabilization of laminated beams with structural damping, which describes the slip occurring at the interface of two-layered objects. By using an invertible matrix function with an eigenvalue parameter and an asymptotic technique for the first order matrix differential equation, we derive an explicit asymptotic formula for the matrix fundamental solutions and then carry out the asymptotic analyses for the eigenpairs. Furthermore, we prove that there is a sequence of generalized eigenfunctions that forms a Riesz basis in the state Hilbert space, and hence the spectrum determined growth condition holds. Moreover, exponential stability of the closed-loop system can be deduced from the eigenvalue expressions. In particular, the semigroup generated by the system operator is a $C_0$-group due to the fact that the three asymptotes of the spectrum are parallel to the imaginary axis.

Journal ArticleDOI
TL;DR: The results obtained show how backward stochastic differential equations can be used to obtain solutions to optimal investment and hedging problems when discontinuities in the underlying price processes are modeled by the arrivals of Poisson processes with stochastic intensities.
Abstract: In this paper, we consider the problem of mean-variance hedging in an incomplete market where the underlying assets are jump diffusion processes which are driven by Brownian motion and doubly stochastic Poisson processes. This problem is formulated as a stochastic control problem, and closed form expressions for the optimal hedging policy are obtained using methods from stochastic control and the theory of backward stochastic differential equations. The results we have obtained show how backward stochastic differential equations can be used to obtain solutions to optimal investment and hedging problems when discontinuities in the underlying price processes are modeled by the arrivals of Poisson processes with stochastic intensities. Applications to the problem of hedging default risk are also discussed.

Journal ArticleDOI
TL;DR: Three approaches, based on Ito calculus, Malliavin calculus, and martingale arguments, are derived to evaluate the sensitivity $\nabla_\alpha J(\alpha)$, and the discretization error of the associated simulation procedures is proved to be essentially linear with respect to the time step.
Abstract: We consider a multidimensional diffusion process $(X^\alpha_t)_{0\leq t\leq T}$ whose dynamics depends on a parameter $\alpha$. Our first purpose is to write as an expectation the sensitivity $\nabla_\alpha J(\alpha)$ for the expected cost $J(\alpha)=\mathbb{E}(f(X^\alpha_T))$, in order to evaluate it using Monte Carlo simulations. This issue arises, for example, from stochastic control problems (where the controller is parameterized, which reduces the control problem to a parametric optimization one) or from model misspecifications in finance. Previous evaluations of $\nabla_\alpha J(\alpha)$ using simulations were limited to smooth cost functions $f$ or to diffusion coefficients not depending on $\alpha$ (see Yang and Kushner, SIAM J. Control Optim., 29 (1991), pp. 1216--1249). In this paper, we cover the general case, deriving three new approaches to evaluate $\nabla_\alpha J(\alpha)$, which we call the Malliavin calculus approach, the adjoint approach, and the martingale approach. To accomplish this, we leverage Ito calculus, Malliavin calculus, and martingale arguments. In the second part of this work, we provide discretization procedures to simulate the relevant random variables; then we analyze their respective errors. This analysis proves that the discretization error is essentially linear with respect to the time step. This result, which was already known in some specific situations, appears to be true in this much wider context. Finally, we provide numerical experiments in random mechanics and finance and compare the different methods in terms of variance, complexity, computational time, and time discretization error.
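For the special case of a smooth cost f and a toy one-dimensional diffusion (geometric Brownian motion with drift parameter alpha — an assumption of this sketch, not the paper's general setting), the sensitivity can already be written as an expectation of a pathwise derivative and evaluated by Monte Carlo. The paper's Malliavin, adjoint, and martingale approaches are needed precisely when such smoothness fails or the diffusion coefficient depends on the parameter.

```python
import numpy as np

# Pathwise Monte Carlo sensitivity for a toy case: X_T is geometric
# Brownian motion with drift parameter alpha and f(x) = x (both smooth;
# illustrative assumptions). Then dX_T/dalpha = T * X_T, so
# grad_alpha J(alpha) = E[T * X_T], with closed form T * x0 * exp(alpha*T).
rng = np.random.default_rng(3)
x0, alpha, sigma, T, N = 1.0, 0.05, 0.2, 1.0, 200_000

W = rng.normal(0.0, np.sqrt(T), N)                        # Brownian endpoints
X = x0 * np.exp((alpha - 0.5 * sigma**2) * T + sigma * W)
estimate = np.mean(T * X)                                 # pathwise estimator
exact = T * x0 * np.exp(alpha * T)                        # reference value
print(estimate, exact)
```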

Journal ArticleDOI
TL;DR: A discrete-time partially observed linear system is studied and necessary and sufficient conditions for stabilizability are established, which give the tightest lower bounds on the channel capacities for which stabilization is possible.
Abstract: The paper addresses a feedback stabilization problem involving bit-rate communication capacity constraints. A discrete-time partially observed linear system is studied. Unlike classic theory, the signals from multiple sensors are transmitted to the controller over separate finite capacity communication channels. The sensors do not have constant access to the channels, and the channels are not perfect: the messages incur time-varying transmission delays and may be corrupted or lost. However, we suppose that the time-average number of bits per sample period that can be successfully transmitted over the channel during a time interval converges to a certain limit as the length of the interval becomes large. Necessary and sufficient conditions for stabilizability are established. They give the tightest lower bounds on the channel capacities for which stabilization is possible. An algorithm for stabilization is also presented.
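The flavor of such capacity bounds can be seen in a textbook-style toy example (a single noiseless channel, unlike the paper's multi-sensor, lossy setting): a scalar plant x+ = a x + u is stabilizable over an R-bit channel when R > log2(a), because the quantization cell containing the state then contracts.

```python
# Toy data-rate example (not the paper's setup): stabilize x+ = a*x + u
# over a noiseless R-bit channel. Here R = 2 > log2(a) = 1, so the known
# bound L on |x| shrinks by the factor a / 2**R = 1/2 every step.
a, R = 2.0, 2
L, x = 10.0, 7.3                 # known bound |x| <= L, true initial state
for _ in range(40):
    cells = 2 ** R
    width = 2.0 * L / cells
    idx = min(int((x + L) / width), cells - 1)   # encoder: send R bits
    x_hat = -L + (idx + 0.5) * width             # decoder: cell midpoint
    x = a * x - a * x_hat        # control u = -a * x_hat
    L = a * L / cells            # updated bound on |x|

print(abs(x), L)                 # both shrink geometrically toward zero
```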

Journal ArticleDOI
TL;DR: It is proved that the value in randomized stopping times exists as soon as the payoff processes are right-continuous; in particular, unlike the existing literature, no conditions on the relations between the payoff processes are assumed.
Abstract: We study two-player zero-sum stopping games in continuous time and infinite horizon. We prove that the value in randomized stopping times exists as soon as the payoff processes are right-continuous. In particular, as opposed to existing literature, we do not assume any conditions on the relations between the payoff processes.

Journal ArticleDOI
TL;DR: Given two analytic nonlinear input-output systems represented as Fliess operators, four system interconnections are considered in a unified setting and an existing notion of a composition product for formal power series has its set of known properties significantly expanded.
Abstract: Given two analytic nonlinear input-output systems represented as Fliess operators, four system interconnections are considered in a unified setting: the parallel connection, product connection, cascade connection, and feedback connection. In each case, the corresponding generating series is produced and conditions for the convergence of the corresponding Fliess operator are given. In the process, an existing notion of a composition product for formal power series has its set of known properties significantly expanded. In addition, the notion of a feedback product for formal power series is shown to be well defined in a broad context, and its basic properties are characterized.

Journal ArticleDOI
Fabian Wirth1
TL;DR: Parameter-dependent Lyapunov functions characterizing the exponential growth rate are constructed in the generic irreducible case, and the maximal exponential growth rate may be approximated by considering only the periodic systems in the family of time-varying systems.
Abstract: We study families of linear time-varying systems, where time variations have to satisfy restrictions on the dwell time, that is, on the minimum distance between discontinuities, as well as on the derivative in between discontinuities. Such classes of systems may be formulated as linear flows on vector bundles. The main objective of this paper is to construct parameter-dependent Lyapunov functions, which characterize the exponential growth rate. This is possible in the generic irreducible case. As an application the Gelfand formula is generalized to the class of systems studied here. In other words, the maximal exponential growth rate may be approximated by only considering the periodic systems in the family of time-varying systems. A perspective on the question of continuous dependence of the exponential growth rate on the data is given.

Journal ArticleDOI
TL;DR: A new approach to optimal control problems of the monotone follower type is presented, which uses the convexity of the cost functional to derive a first order characterization of optimal policies based on the Snell envelope of the objective functional's gradient at the optimum.
Abstract: We present a new approach to solve optimal control problems of the monotone follower type. The key feature of our approach is that it allows us to include an arbitrary dynamic fuel constraint. Instead of dynamic programming, we use the convexity of our cost functional to derive a first order characterization of optimal policies based on the Snell envelope of the objective functional's gradient at the optimum. The optimal control policy is constructed explicitly in terms of the solution to a representation theorem for stochastic processes obtained in Bank and El Karoui (2004), {Ann. Probab.}, 32, pp. 1030--1067. As an illustration, we show how our methodology allows us to extend the scope of the explicit solutions obtained for the classical monotone follower problem and for an irreversible investment problem arising in economics.

Journal ArticleDOI
TL;DR: A model of a hybrid control system in which both discrete and continuous controls are involved is investigated, and a quasi-variational inequality satisfied by the value function V in the viscosity sense is derived.
Abstract: We investigate a model of hybrid control system in which both discrete and continuous controls are involved. In this general model, discrete controls act on the system at a given set interface. The state of the system is changed discontinuously when the trajectory hits predefined sets, namely, an autonomous jump set A or a controlled jump set C where the controller can choose to jump or not. At each jump, the trajectory can move to a different Euclidean space. We prove the continuity of the associated value function V with respect to the initial point. Using the dynamic programming principle satisfied by V, we derive a quasi-variational inequality satisfied by V in the viscosity sense. We characterize the value function V as the unique viscosity solution of the quasi-variational inequality by the comparison principle method.

Journal ArticleDOI
TL;DR: The paper extends previous work in two ways: (1) it deals with two coupled partial differential equations, and (2) under certain circumstances handles equations defined on a semi-infinite domain.
Abstract: In this paper, we continue the development of state feedback boundary control laws based on the backstepping methodology, for the stabilization of unstable, parabolic partial differential equations. We consider the linearized Ginzburg--Landau equation, which models, for instance, vortex shedding in bluff body flows. Asymptotic stabilization is achieved by means of boundary control via state feedback in the form of an integral operator. The kernel of the operator is shown to be twice continuously differentiable, and a series approximation for its solution is given. Under certain conditions on the parameters of the Ginzburg--Landau equation, compatible with vortex shedding modelling on a semi-infinite domain, the kernel is shown to have compact support, resulting in partial state feedback. Simulations are provided in order to demonstrate the performance of the controller. In summary, the paper extends previous work in two ways: (1) it deals with two coupled partial differential equations, and (2) under certain circumstances handles equations defined on a semi-infinite domain.

Journal ArticleDOI
TL;DR: It is shown that there exists a generalized hold function such that unity sampled-data feedback renders the closed-loop system exponentially stable as well as $L^2$-stable (in the input-output sense).
Abstract: We consider well-posed linear infinite-dimensional systems, the outputs of which are sampled in a generalized sense using a suitable weighting function. Under certain natural assumptions on the system, the weighting function, and the sampling period, we show that there exists a generalized hold function such that unity sampled-data feedback renders the closed-loop system exponentially stable (in the state-space sense) as well as $L^2$-stable (in the input-output sense). To illustrate our main result, we describe an application to a structurally damped Euler-Bernoulli beam.

Journal ArticleDOI
TL;DR: An a priori error analysis for the finite element Galerkin discretization of parameter identification problems with a finite number of unknown parameters is developed.
Abstract: We develop an a priori error analysis for the finite element Galerkin discretization of parameter identification problems. The state equation is given by an elliptic partial differential equation of second order with a finite number of unknown parameters, which are estimated using pointwise measurements of the state variable.

Journal ArticleDOI
TL;DR: It is proved that the origin of all globally asymptotically controllable systems can be globally asymptotically stabilized via a hybrid feedback with robustness with respect to measurement noise, actuator errors, and external disturbances.
Abstract: This paper deals with asymptotically controllable systems for which there exists no smooth stabilizing state feedback. To investigate the robustness asymptotic stabilization property, a new class of hybrid feedbacks (with a continuous component and a discrete one) is introduced: the hybrid patchy feedbacks. The notion of solutions is a generalization of $\pi$-solutions and Euler solutions. It is proved that the origin of all globally asymptotically controllable systems can be globally asymptotically stabilized via a hybrid feedback with robustness with respect to measurement noise, actuator errors, and external disturbances.

Journal ArticleDOI
TL;DR: A multidimensional controlled wave equation on a bounded domain, subject to partial Dirichlet control and colocated observation is analyzed by means of a partial Fourier transform and the corresponding feedthrough operator is found to be the identity operator on the input space.
Abstract: In this paper we analyze a multidimensional controlled wave equation on a bounded domain, subject to partial Dirichlet control and colocated observation. By means of a partial Fourier transform, it is shown that the system is well-posed and regular in the sense of D. Salamon and G. Weiss. The corresponding feedthrough operator is found to be the identity operator on the input space.

Journal ArticleDOI
TL;DR: A weakened maximum principle is obtained for problems with data measurable in the time variable, with the time transversality conditions deduced with the help of an extra convexity assumption on the state constraints.
Abstract: In this article, a free-time impulsive control problem with state constraints and equality and inequality constraints on the trajectory endpoints is considered. A weakened maximum principle is obtained for problems with data measurable in the time variable, with the time transversality conditions deduced with the help of some extra convexity assumption on the state constraints. In the case of smooth problems a nondegenerate maximum principle is derived by using a penalty function method.

Journal ArticleDOI
TL;DR: This work studies the calibration of volatility from the observations of the prices of an American option and gives necessary optimality conditions involving an adjoint state for a simplified inverse problem and the differentiability of the cost function.
Abstract: In finance, the price of an American option is obtained from the price of the underlying asset by solving a parabolic variational inequality. The free boundary associated with this variational inequality can be interpreted as the price for which the option should be exercised. The calibration of volatility from the observations of the prices of an American option yields an inverse problem for the previously mentioned parabolic variational inequality. After studying the variational inequality and the exercise price, we give results concerning the sensitivity of the option price and of the exercise price with respect to the variations of the volatility. The inverse problem is addressed by a least square method, with suitable regularization terms. We give necessary optimality conditions involving an adjoint state for a simplified inverse problem and we study the differentiability of the cost function. Optimality conditions are also given for the genuine calibration problem.

Journal ArticleDOI
TL;DR: This work considers the Mayer optimal control problem with dynamics given by a nonconvex differential inclusion, whose trajectories are constrained to a closed set and obtains necessary optimality conditions in the form of the maximum principle together with a relation between the costate and the value function.
Abstract: We consider the Mayer optimal control problem with dynamics given by a nonconvex differential inclusion, whose trajectories are constrained to a closed set and obtain necessary optimality conditions in the form of the maximum principle together with a relation between the costate and the value function. This additional relation is applied in turn to show that the maximum principle is nondegenerate. We also provide a sufficient condition for the normality of the maximum principle. To derive these results we use convex linearizations of differential inclusions and convex linearizations of constraints along optimal trajectories. Then duality theory of convex analysis is applied to derive necessary conditions for optimality. In this way we extend the known relations between the maximum principle and dynamic programming from the unconstrained problems to the constrained case.