
Showing papers in "Siam Journal on Control and Optimization in 1998"


Journal ArticleDOI
TL;DR: In this article, a bounded real lemma for stochastic linear systems subject to both deterministic and stochastic perturbations is presented, leading to a lower bound for the stability radii which is shown to be tight in some special cases.
Abstract: We consider stochastic linear plants which are controlled by dynamic output feedback and subjected to both deterministic and stochastic perturbations. Our objective is to develop an $H^{\infty}$-type theory for such systems. We prove a bounded real lemma for stochastic systems with deterministic and stochastic perturbations. This enables us to obtain necessary and sufficient conditions for the existence of a stabilizing compensator which keeps the effect of the perturbations on the to-be-controlled output below a given threshold $\gamma > 0$. In the deterministic case, the analogous conditions involve two uncoupled linear matrix inequalities, but in the stochastic setting we obtain coupled nonlinear matrix inequalities instead. The connection between $H^{\infty}$ theory and stability radii is discussed and leads to a lower bound for the radii, which is shown to be tight in some special cases.

373 citations


Journal ArticleDOI
TL;DR: In this paper, the authors develop a theory for linear time-invariant differential systems and quadratic functionals based on one-variable and two-variable polynomial matrices.
Abstract: This paper develops a theory around the notion of quadratic differential forms in the context of linear differential systems. In many applications, we need to understand not only the behavior of the system variables but also the behavior of certain functionals of these variables. The obvious cases where such functionals are important are in Lyapunov theory and in LQ and $H_{\infty}$ optimal control. With some exceptions, these theories have almost invariably concentrated on first order models and state representations. In this paper, we develop a theory for linear time-invariant differential systems and quadratic functionals. We argue that in the context of systems described by one-variable polynomial matrices, the appropriate tool for expressing quadratic functionals of the system variables is two-variable polynomial matrices. The main achievement of this paper is a description of the interaction of one- and two-variable polynomial matrices for the analysis of functionals and for the application of higher order Lyapunov functionals.

296 citations
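As a concrete illustration of the one-variable/two-variable correspondence described above (the specific example is ours, in the notation standard for quadratic differential forms):

```latex
\[
\Phi(\zeta,\eta)=\sum_{k,l}\Phi_{k,l}\,\zeta^{k}\eta^{l}
\quad\Longrightarrow\quad
Q_\Phi(w)=\sum_{k,l}
\Bigl(\tfrac{d^{k}w}{dt^{k}}\Bigr)^{\!\top}
\Phi_{k,l}\,
\tfrac{d^{l}w}{dt^{l}} .
\]
% For a scalar variable w, the two-variable polynomial
% \Phi(\zeta,\eta) = 1 + \zeta\eta induces the quadratic differential form
\[
Q_\Phi(w) \;=\; w^{2} + \dot w^{2},
\]
% i.e., the (doubled) energy of the harmonic oscillator, a typical
% higher-order Lyapunov functional of the kind the paper studies.
```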


Journal ArticleDOI
TL;DR: In this article, the optimal control of stochastic linear quadratic regulators (LQRs) is studied under the assumption that the control weight costs must be positive definite.
Abstract: This paper considers optimal (minimizing) control of stochastic linear quadratic regulators (LQRs). The assumption that the control weight costs must be positive definite, inherited from the determ...

293 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic behavior of a distributed, asynchronous stochastic approximation scheme is analyzed in terms of a limiting nonautonomous differential equation, and the relation between the latter and suitably rescaled relative frequencies of updates of different components is underscored.
Abstract: The asymptotic behavior of a distributed, asynchronous stochastic approximation scheme is analyzed in terms of a limiting nonautonomous differential equation. The relation between the latter and the relative values of suitably rescaled relative frequencies of updates of different components is underscored.

182 citations
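A minimal numerical sketch of the setting above, under our own assumptions: two components of an iterate are updated asynchronously with different relative frequencies, each on its own local clock, and both still converge to the root of the mean field (here $h(x) = -x$, with root $0$). The function name, stepsizes, and frequencies are illustrative, not taken from the paper.

```python
import random

def async_sa(steps=20000, seed=0):
    # Two components updated asynchronously: component 0 is chosen for
    # update three times as often as component 1, mimicking unequal
    # relative update frequencies across processors.
    rng = random.Random(seed)
    x = [5.0, -5.0]
    n = [0, 0]                      # per-component update counts (local clocks)
    for _ in range(steps):
        i = 0 if rng.random() < 0.75 else 1
        n[i] += 1
        a = 1.0 / n[i]              # stepsize driven by the local clock
        noise = rng.gauss(0.0, 0.1)
        x[i] += a * (-x[i] + noise)  # noisy update toward the root x = 0
    return x

x = async_sa()
```

Despite the unequal update rates, both components track the same limiting dynamics and settle near the root.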


Journal ArticleDOI
TL;DR: In this paper, the authors considered the longitudinal and transversal vibrations of the Euler-Bernoulli beam with Kelvin-Voigt damping distributed locally on any subinterval of the region occupied by the beam.
Abstract: In this paper, we consider the longitudinal and transversal vibrations of the Euler--Bernoulli beam with Kelvin--Voigt damping distributed locally on any subinterval of the region occupied by the beam. We prove that the semigroup associated with the equation for the transversal motion of the beam is exponentially stable, although the semigroup associated with the equation for the longitudinal motion of the beam is not exponentially stable. Due to the locally distributed and unbounded nature of the damping, we use a frequency domain method and combine a contradiction argument with the multiplier technique to carry out a special analysis for the resolvent. We also show that the associated semigroups are not analytic.

182 citations


Journal ArticleDOI
TL;DR: In this paper, the stability of a flexible beam that is clamped at one end and free at the other was studied and it was shown that the closed-loop system is well-posed and is exponentially stable.
Abstract: We study the stability of a flexible beam that is clamped at one end and free at the other; a mass is also attached to the free end of the beam. To stabilize this system we apply a boundary control force at the free end of the beam. We prove that the closed-loop system is well-posed and is exponentially stable. We then analyze the spectrum of the system for a special case and prove that the spectrum determines the exponential decay rate for the considered case.

173 citations


Journal ArticleDOI
TL;DR: In this paper, a general maximum principle is proved for the partially observed optimal control of possibly degenerate stochastic differential equations, with correlated noises between the system and the observation.
Abstract: This paper concerns partially observed optimal control of possibly degenerate stochastic differential equations, with correlated noises between the system and the observation. The control is allowed to enter into all the coefficients. A general maximum principle is proved for the partially observed optimal control, and the relations among the adjoint processes are established. Adjoint vector fields, which are adapted to the past and present observations, are introduced as the solutions to some backward stochastic partial differential equations (BSPDEs), and their relations are established. Under suitable conditions, the adjoint processes are characterized in terms of the adjoint vector fields, their differentials and Hessians, along the optimal state process. Some other formulations of the partially observed stochastic maximum principle are then derived.

172 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that the original stochastic control problem is equivalent to a linear program over a space of measures under a variety of optimality criteria, and an extension of Echeverria's theorem characterizing stationary distributions for (uncontrolled) Markov processes is obtained as a corollary.
Abstract: Given a solution of a controlled martingale problem it is shown under general conditions that there exists a solution having Markov controls which has the same cost as the original solution. This result is then used to show that the original stochastic control problem is equivalent to a linear program over a space of measures under a variety of optimality criteria. Existence and characterization of optimal Markov controls then follows. An extension of Echeverria's theorem characterizing stationary distributions for (uncontrolled) Markov processes is obtained as a corollary. In particular, this extension covers diffusion processes with discontinuous drift and diffusion coefficients.

153 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the optimal boundary control problem for the two-dimensional Navier-Stokes equations in an unbounded domain, and established the existence of an optimal solution over the control set.
Abstract: We study optimal boundary control problems for the two-dimensional Navier--Stokes equations in an unbounded domain. Control is effected through the Dirichlet boundary condition and is sought in a subset of the trace space of velocity fields with minimal regularity satisfying the energy estimates. An objective of interest is the drag functional. We first establish three important results for inhomogeneous boundary value problems for the Navier--Stokes equations; namely, we identify the trace space for the velocity fields possessing finite energy, we prove the existence of a solution for the Navier--Stokes equations with boundary data belonging to the trace space, and we identify the space in which the stress vector (along the boundary) of admissible solutions is well defined. Then, we prove the existence of an optimal solution over the control set. Finally, we justify the use of Lagrange multiplier principles, derive an optimality system of equations in the weak sense from which optimal states and controls may be determined, and prove that the optimality system of equations satisfies in appropriate senses a system of partial differential equations with boundary values.

143 citations


Journal ArticleDOI
TL;DR: In this article, a model reference adaptive control law is defined for nonlinear distributed parameter systems, where the reference model is assumed to be governed by a strongly coercive linear operator defined with respect to a Gelfand triple of reflexive Banach and Hilbert spaces.
Abstract: A model reference adaptive control law is defined for nonlinear distributed parameter systems. The reference model is assumed to be governed by a strongly coercive linear operator defined with respect to a Gelfand triple of reflexive Banach and Hilbert spaces. The resulting nonlinear closed-loop system is shown to be well posed. The tracking error is shown to converge to zero, and regularity results for the control input and the output are established. With an additional richness, or persistence of excitation assumption, the parameter error is shown to converge to zero as well. A finite-dimensional approximation theory is developed. Examples involving both first- and second-order, parabolic and hyperbolic, and linear and nonlinear systems are discussed, and numerical simulation results are presented.

136 citations


Journal ArticleDOI
TL;DR: In this paper, the Byrnes-Martin integral invariance principle for ordinary differential equations is extended to differential inclusions on $\mathbb{R}^N$. The extended result is applied in demonstrating the existence of adaptive stabilizers and servomechanisms for a variety of nonlinear system classes.
Abstract: The Byrnes--Martin integral invariance principle for ordinary differential equations is extended to differential inclusions on $\mathbb{R}^N$. The extended result is applied in demonstrating the existence of adaptive stabilizers and servomechanisms for a variety of nonlinear system classes.

Journal ArticleDOI
TL;DR: In this paper, the authors present a formulation of differential flatness in terms of absolute equivalence between exterior differential systems, and show that a system is differentially flat if and only if it is feedback linearizable via static state feedback.
Abstract: This paper presents a formulation of differential flatness---a concept originally introduced by Fliess, Levine, Martin, and Rouchon---in terms of absolute equivalence between exterior differential systems. Systems that are differentially flat have several useful properties that can be exploited to generate effective control strategies for nonlinear systems. The original definition of flatness was given in the context of differential algebra and required that all mappings be meromorphic functions. The formulation of flatness presented here does not require any algebraic structure and allows one to use tools from exterior differential systems to help characterize differentially flat systems. In particular, it is shown that, under regularity assumptions and in the case of single input control systems (i.e., codimension 2 Pfaffian systems), a system is differentially flat if and only if it is feedback linearizable via static state feedback. In higher codimensions our approach does not allow one to prove that feedback linearizability about an equilibrium point and flatness are equivalent: one must be careful with the role of time as well as the use of prolongations that may not be realizable as dynamic feedback in a control setting. Applications of differential flatness to nonlinear control systems and open questions are also discussed.
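Differential flatness in its simplest guise can be sketched numerically: for the double integrator $\ddot x = u$, the flat output is $y = x$, and any sufficiently smooth reference $y(t)$ determines the state and input algebraically ($x = y$, $u = \ddot y$). The quintic profile and function names below are our own toy choices; this is a caricature of flatness-based planning, not the exterior-differential-systems machinery of the paper.

```python
def y_des(t, T):
    # Quintic rest-to-rest profile for the flat output:
    # y(0)=0, y(T)=1, with zero velocity and acceleration at both ends.
    s = t / T
    return 10*s**3 - 15*s**4 + 6*s**5

def u_des(t, T):
    # Open-loop control recovered from the flat output: u = y''(t).
    s = t / T
    return (60*s - 180*s**2 + 120*s**3) / T**2

def simulate(T=2.0, n=20000):
    # Integrate x'' = u_des(t) with explicit Euler to verify the plan.
    dt = T / n
    x, v = 0.0, 0.0
    for k in range(n):
        t = k * dt
        x += v * dt
        v += u_des(t, T) * dt
    return x, v

x_final, v_final = simulate()
```

Integrating the planned control reproduces the desired rest-to-rest transfer, which is exactly the convenience that flatness buys: trajectory generation reduces to choosing a smooth curve for the flat output.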

Journal ArticleDOI
TL;DR: A theory of optimal control for deterministic discrete event systems is proposed to meet quantitative design requirements, and it is shown that this framework generalizes some of the existing literature.
Abstract: In certain discrete event applications it may be desirable to find a particular controller, within the set of acceptable controllers, which optimizes some quantitative performance measure. In this paper we propose a theory of optimal control to meet such design requirements for deterministic systems. The discrete event system (DES) is modeled by a formal language. Event and cost functions are defined which induce costs on controlled system behavior. The event costs associated with the system behavior can be reduced, in general, only by increasing the control costs. Thus it is nontrivial to find the optimal amount of control to use, and the formulation captures the fundamental tradeoff motivating classical optimal control. Results on the existence of minimally restrictive optimal solutions are presented. Communication protocols are analyzed to motivate the formulation and demonstrate optimal controller synthesis. Algorithms for the computation of optimal controllers are developed for the special case of DES modeled by regular languages. It is shown that this framework generalizes some of the existing literature.

Journal ArticleDOI
TL;DR: Under reasonable, but more stringent, conditions on the quadratic model and on the trial steps, the sequence of iterates generated by the algorithms is shown to have a limit point satisfying the second-order necessary KKT conditions and the local rate of convergence to a nondegenerate strict local minimizer is q-quadratic.
Abstract: In this paper, a family of trust-region interior-point sequential quadratic programming (SQP) algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the variables is described and analyzed. Such nonlinear programs arise, e.g., from the discretization of optimal control problems. The algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints but use solutions of the linearized state equation and the adjoint equation. They are well suited for large scale problems arising from optimal control problems governed by partial differential equations. The algorithms keep strict feasibility with respect to the bound constraints by using an affine scaling method proposed, for a different class of problems, by Coleman and Li [SIAM J. Optim., 6 (1996), pp. 418--445] and they exploit trust-region techniques for equality-constrained optimization. Thus, they allow the computation of the steps using a variety of methods, including many iterative techniques. Global convergence of these algorithms to a first-order Karush--Kuhn--Tucker (KKT) limit point is proved under very mild conditions on the trial steps. Under reasonable, but more stringent, conditions on the quadratic model and on the trial steps, the sequence of iterates generated by the algorithms is shown to have a limit point satisfying the second-order necessary KKT conditions. The local rate of convergence to a nondegenerate strict local minimizer is q-quadratic. The results given here include, as special cases, current results for only equality constraints and for only simple bounds. Numerical results for the solution of an optimal control problem governed by a nonlinear heat equation are reported.
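The affine scaling idea cited above from Coleman and Li can be caricatured in one dimension: the gradient step is scaled by the distance to the bound the step is heading toward, so iterates remain strictly feasible without an explicit projection. The test problem, constants, and function name are ours, and the full trust-region SQP machinery of the paper is omitted.

```python
def scaled_descent(x, grad, lo, hi, alpha=0.25, iters=200):
    # Coleman--Li-style affine scaling in 1D: damp the gradient step by
    # the distance to the bound the negative gradient points at, so the
    # step shrinks automatically as the iterate approaches that bound.
    for _ in range(iters):
        g = grad(x)
        d = (hi - x) if g < 0 else (x - lo)   # distance to the active-side bound
        x = x - alpha * d * g
    return x

# minimize (x - 2)^2 over the box [0, 1]: the constrained solution is x = 1
x_star = scaled_descent(0.5, lambda x: 2.0 * (x - 2.0), 0.0, 1.0)
```

The iterates approach the bound $x = 1$ monotonically from the interior, illustrating how the scaling keeps strict feasibility while still converging to a solution on the boundary.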

Journal ArticleDOI
TL;DR: In this paper, the normal forms and invariants of control systems with a parameter are found and the changes of properties such as controllability of the linearization or stabilizability near a bifurcation point of a control system are studied.
Abstract: The normal forms and invariants of control systems with a parameter are found. Bifurcations of equilibrium sets are classified. The changes of properties such as controllability of the linearization or stabilizability near a bifurcation point of a control system are studied.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a complete characterization of configuration flatness for systems with n degrees of freedom and n-1 controls whose range of control forces only depends on configuration and whose Lagrangian has the form of kinetic energy minus potential.
Abstract: Lagrangian control systems that are differentially flat with flat outputs that depend only on configuration variables are said to be configuration flat. We provide a complete characterization of configuration flatness for systems with n degrees of freedom and n-1 controls whose range of control forces only depends on configuration and whose Lagrangian has the form of kinetic energy minus potential. The method presented allows us to determine if such a system is configuration flat and, if so, provides a constructive method for finding all possible configuration flat outputs. Our characterization relates configuration flatness to Riemannian geometry. We illustrate the method with two examples.

Journal ArticleDOI
TL;DR: In this article, conditions on the system matrices are presented that guarantee that there exists a positive linear observer such that both the error converges to zero and the estimate is positive.
Abstract: Linear compartmental systems are mathematical systems that are frequently used in biology and mathematics. The inputs, states, and outputs of such systems are positive, because they denote amounts or concentrations of material. For linear dynamic systems the observer problem has been solved. The purpose of the observer problem is to determine a linear observer such that the state can be approximated. The difference between the state and its estimate should converge to zero. The interpretation in terms of a physical system requires that an estimate of the state be positive, like the state itself. In this paper conditions on the system matrices are presented that guarantee that there exists a positive linear observer such that both the error converges to zero and the estimate is positive.
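A discrete-time analogue of the positive-observer idea above can be sketched in a few lines: with a nonnegative gain $L$ and $A - LC$ entrywise nonnegative with spectral radius below one, the estimation error $e_{k+1} = (A - LC)e_k$ decays while the estimate stays nonnegative. The matrices here are our own toy data, not conditions taken from the paper.

```python
A = [[0.5, 0.2],
     [0.1, 0.6]]          # nonnegative two-compartment dynamics
C = [1.0, 0.0]            # measured output: amount in the first compartment
L = [0.4, 0.1]            # nonnegative observer gain; A - L C >= 0 entrywise

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [2.0, 1.0]            # true state (amounts of material, hence positive)
xh = [0.0, 0.0]           # observer estimate
min_estimate = 0.0        # track positivity of the estimate along the run
for _ in range(60):
    innov = sum(C[i] * (x[i] - xh[i]) for i in range(2))  # y - C xh
    x = mat_vec(A, x)
    Axh = mat_vec(A, xh)
    xh = [Axh[i] + L[i] * innov for i in range(2)]         # Luenberger update
    min_estimate = min(min_estimate, xh[0], xh[1])

err = max(abs(x[i] - xh[i]) for i in range(2))
```

Here $A - LC$ is nonnegative with eigenvalues $0.1$ and $0.6$, so the error contracts geometrically and the estimate never leaves the positive orthant, which is the physical requirement the paper formalizes.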

Journal ArticleDOI
TL;DR: In this article, the notion of approximate Jacobian matrices is introduced for a continuous vector-valued map, based on the idea of convexificators of real-valued functions.
Abstract: The notion of approximate Jacobian matrices is introduced for a continuous vector-valued map. It is shown, for instance, that the Clarke generalized Jacobian is an approximate Jacobian for a locally Lipschitz map. The approach is based on the idea of convexificators of real-valued functions. Mean value conditions for continuous vector-valued maps and Taylor's expansions for continuously Gâteaux differentiable functions (i.e., $C^1$-functions) are presented in terms of approximate Jacobians and approximate Hessians, respectively. Second-order necessary and sufficient conditions for optimality and convexity of $C^1$-functions are also given.

Journal ArticleDOI
TL;DR: In this paper, the initial value problem with boundary control for a scalar nonlinear conservation law is considered, and the authors give a characterization of the set of attainable profiles at a fixed time $T>0$ and at a fixed point $\bar x>0$.
Abstract: We consider the initial value problem with boundary control for a scalar nonlinear conservation law \begin{equation*} u_t+[f(u)]_x=0,\qquad u(0,x)=0,\qquad u(\cdot,0)=\tilde u\in{\cal U},\tag{$\ast$} \end{equation*} on the domain $\Omega=\{(t,x)\in{\Bbb R}^2: t\geq 0,\ x\geq 0\}$. Here $u=u(t,x)$ is the state variable, ${\cal U}$ is a set of bounded boundary data regarded as controls, and $f$ is assumed to be strictly convex. We give a characterization of the sets of attainable profiles at a fixed time $T>0$ and at a fixed point $\bar x>0$: \begin{equation*} \begin{aligned} {\cal A}(T)&=\{u(T,\cdot): u \hbox{ is a solution of } (\ast)\},\\ {\cal A}(\bar x)&=\{u(\cdot,\bar x): u \hbox{ is a solution of } (\ast)\}, \end{aligned} \qquad {\cal U}=L^\infty({\Bbb R}^+). \end{equation*} Moreover we prove that ${\cal A}(T)$ and ${\cal A}(\bar x)$ are compact subsets of $L^1$ and $L^1_{loc}$, respectively, whenever ${\cal U}$ is a set of controls which pointwise satisfy closed convex constraints, together with some additional integral inequalities.

Journal ArticleDOI
TL;DR: In this article, the authors show that the minimal time function $T_S(\cdot)$ is a proximal solution to the Hamilton-Jacobi equation and give necessary and sufficient conditions for $T_S(\cdot)$ to be Lipschitz continuous near the target set.
Abstract: Under general hypotheses on the target set $S$ and the dynamics of the system, we show that the minimal time function $T_S(\cdot)$ is a proximal solution to the Hamilton--Jacobi equation. Uniqueness results are obtained with two different kinds of boundary conditions. A new propagation result is proven, and as an application, we give necessary and sufficient conditions for $T_S(\cdot)$ to be Lipschitz continuous near $S$. A Petrov-type modulus condition is also shown to be sufficient for continuity of $T_S(\cdot)$ near $S$.

Journal ArticleDOI
TL;DR: In this article, a Pontryagin's minimum principle in qualified form was proved for optimal control problems with semilinear parabolic equations with pointwise state constraints and unbounded control.
Abstract: This paper deals with optimal control problems governed by semilinear parabolic equations with pointwise state constraints and unbounded controls. Under some strong stability assumption, we obtain necessary optimality conditions in the form of a Pontryagin's minimum principle in qualified form. A Pontryagin's principle in nonqualified form is also proved without any stability condition.

Journal ArticleDOI
TL;DR: In this paper, it is shown that if player B just waits, the minimax trajectory of player A oscillates with a geometrically increasing amplitude, and that the optimal mixed strategy of player A is a mixture of geometric trajectories.
Abstract: Two players A and B are randomly placed on a line. The distribution of the distance between them is unknown except that the expected initial distance of the (two) players does not exceed some constant $\mu.$ The players can move with maximal velocity 1 and would like to meet one another as soon as possible. Most of the paper deals with the asymmetric rendezvous in which each player can use a different trajectory. We find rendezvous trajectories which are efficient against all probability distributions in the above class. (It turns out that our trajectories do not depend on the value of $\mu.$) We also obtain the minimax trajectory of player A if player B just waits for him. This trajectory oscillates with a geometrically increasing amplitude. It guarantees an expected meeting time not exceeding $6.8\mu.$ We show that, if player B also moves, then the expected meeting time can be reduced to $5.7\mu.$ The expected meeting time can be further reduced if the players use mixed strategies. We show that if player B rests, then the optimal strategy of player A is a mixture of geometric trajectories. It guarantees an expected meeting time not exceeding $4.6\mu.$ This value can be reduced even more (below $4.42\mu$) if player B also moves according to a (correlated) mixed strategy. We also obtain a bound for the expected meeting time of the corresponding symmetric rendezvous problem.
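A toy version of the waiting-partner scenario above: player B rests at an unknown distance on an unknown side of A, and A searches by sweeping back and forth with turning points growing geometrically. The routine below computes the meeting time of such an oscillating search for a given placement of B; the paper's minimax trajectory is tuned differently, so the ratio `r = 2` and the function name are illustrative only.

```python
def meeting_time(d, side, r=2.0):
    # Player B waits at distance d > 0 on the given side (+1 or -1) of the
    # origin.  Player A moves at unit speed, sweeping to +1, then -r, then
    # +r^2, ... until a sweep on B's side reaches B.
    t, pos = 0.0, 0.0
    target, direction = 1.0, 1
    while True:
        if direction == side and target >= d:
            return t + abs(side * d - pos)    # A meets B during this sweep
        turn = direction * target
        t += abs(turn - pos)                  # walk to the next turning point
        pos = turn
        direction, target = -direction, target * r

# e.g. B at distance 3 on the positive side: A walks 0 -> 1 -> -2 -> 3,
# so the meeting time is 1 + 3 + 5 = 9
t_meet = meeting_time(3, +1)
```

Averaging such times over the distribution of B's position is what the expected-meeting-time bounds in the paper optimize, with the geometric growth rate as the key design parameter.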

Journal ArticleDOI
TL;DR: In this paper, a cost-biased parameter estimator is introduced to overcome the identifiability problem in adaptive control, and the corresponding adaptive scheme is proven to be stable and optimal when the unknown system parameter lies in an infinite, yet compact, parameter set.
Abstract: In adaptive control, a standard approach is to resort to the so-called certainty equivalence principle which consists of generating some standard parameter estimate and then using it in the control law as if it were the true parameter. As a consequence of this philosophy, the estimation problem is decoupled from the control problem and this substantially simplifies the corresponding adaptive control scheme. On the other hand, the complete absence of dual properties makes certainty equivalent controllers run into an identifiability problem which generally leads to a strictly suboptimal performance. In this paper, we introduce a cost-biased parameter estimator to overcome this difficulty. This estimator is applied to a linear quadratic Gaussian controller. The corresponding adaptive scheme is proven to be stable and optimal when the unknown system parameter lies in an infinite, yet compact, parameter set.

Journal ArticleDOI
TL;DR: In this article, a penalized Neumann boundary control approach for solving an optimal Dirichlet boundary control problem associated with the two- or three-dimensional steady-state Navier-Stokes equations is introduced.
Abstract: We introduce a penalized Neumann boundary control approach for solving an optimal Dirichlet boundary control problem associated with the two- or three-dimensional steady-state Navier--Stokes equations. We prove the convergence of the solutions of the penalized Neumann control problem, the suboptimality of the limit, and the optimality of the limit under further restrictions on the data. We describe the numerical algorithm for solving the penalized Neumann control problem and report some numerical results.

Journal ArticleDOI
TL;DR: In this article, the stability theorems of Filippov's type in the convex and nonconvex case are proved under a one-sided Lipschitz condition.
Abstract: Ordinary differential and functional-differential inclusions with compact right-hand sides are considered. Stability theorems of Filippov's type in the convex and nonconvex case are proved under a one-sided Lipschitz condition, which extends the notions of Lipschitz continuity, dissipativity, and the uniform one-sided Lipschitz condition for set-valued mappings. The accuracy of approximation of the solution sets by means of the Euler discretization scheme for both types of inclusions is estimated.
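The Euler scheme whose accuracy the paper estimates can be illustrated on a standard one-sided Lipschitz example (our choice, not the paper's): the inclusion $\dot x \in -\mathrm{Sign}(x)$, whose right-hand side is dissipative but not Lipschitz at the origin. Each step uses a measurable selection from the set-valued map, and the iterates reach a band of width $O(h)$ around the equilibrium.

```python
def euler_inclusion(x0, h, n):
    # Explicit Euler for x' in -Sign(x): at each step pick the selection
    # -sign(x) from the set-valued right-hand side (0 at the origin).
    x, traj = x0, [x0]
    for _ in range(n):
        s = 0.0 if x == 0.0 else (1.0 if x > 0.0 else -1.0)
        x = x - h * s
        traj.append(x)
    return traj

traj = euler_inclusion(1.0, 0.01, 200)
```

The trajectory decreases at unit rate until it hits a neighborhood of the origin and then chatters within one step size of it, consistent with an $O(h)$ approximation of the solution set.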

Journal ArticleDOI
TL;DR: In this paper, it is shown that any near-optimal control nearly maximizes the "$\mathcal{H}$-function" in some integral sense, and vice versa if certain additional concavity conditions are imposed.
Abstract: Near-optimization is as sensible and important as optimization for both theory and applications. This paper concerns dynamic near-optimization, or near-optimal controls, for systems governed by the Ito stochastic differential equations (SDEs), where both the drift and diffusion terms are allowed to depend on controls and the systems are allowed to be degenerate. Necessary and sufficient conditions for a control to be near-optimal are studied. It is shown that any near-optimal control nearly maximizes the "$\mathcal{H}$-function" (which is a generalization of the usual Hamiltonian and is quadratic with respect to the diffusion coefficients) in some integral sense, and vice versa if certain additional concavity conditions are imposed. Error estimates for both the near-optimality of the controls and the near-maximum of the $\mathcal{H}$-function are obtained, based on some delicate estimates of the adjoint processes. Examples are presented to demonstrate the results.

Journal ArticleDOI
TL;DR: In this article, the identification of linear dynamic systems whose inputs and outputs may be contaminated by noise is considered; in particular, the underlying system is then not uniquely determined from the population second moments of the observations.
Abstract: We deal with problems connected with the identification of linear dynamic systems in situations when inputs and outputs may be contaminated by noise. The cases of uncorrelated noise components and of bounded noise are considered. If the inputs may also be contaminated by noise, a number of additional complications in identification arise; in particular, the underlying system is not uniquely determined from the population second moments of the observations. A description of classes of observationally equivalent systems is given, continuity properties of mappings relating classes of observationally equivalent systems to the spectral densities of the observations are derived, and the classes of spectral densities corresponding to a given maximum number of outputs are studied.

Journal ArticleDOI
TL;DR: In this paper, the authors derived the uniform energy decay rates for the model without the above-mentioned restrictions and showed that simple, monotone nonlinear feedback (without the tangential derivatives of the horizontal displacements) provides the uniform decay rate for the energy.
Abstract: The full von Kármán system accounting for in-plane accelerations and describing the transient deformations of a thin, elastic plate subject to edge loading is considered. The energy dissipation is introduced via the nonlinear velocity feedback acting on a part of the edge of the plate. It is known [J. Puel and M. Tucsnak, SIAM J. Control Optim., 33 (1995), pp. 255--273] that in the case of linear dissipation and "star-shaped" domains, boundary velocity feedback with the tangential derivatives of horizontal displacements leads to the exponential decay rates for the energy of the resulting closed loop system. The main goal of the paper is to derive the uniform energy decay rates valid for the model without the above-mentioned restrictions. In particular, it is shown that simple, monotone nonlinear feedback (without the tangential derivatives of the horizontal displacements) provides the uniform decay rates for the energy in the absence of geometric hypotheses imposed on the controlled part of the boundary. This is accomplished by establishing, among other things, "sharp" regularity results valid for the boundary traces of solutions corresponding to this nonlinear model and by employing a Holmgren-type uniqueness result proved recently in [V. Isakov, J. Differential Equations, 97 (1997), pp. 134--147] for the dynamical systems of elasticity which are overdetermined on the boundary.

Journal ArticleDOI
TL;DR: For a nonlinear optimal control problem with state constraints, this paper gives conditions under which the optimal control depends Lipschitz continuously in the $L^2$ norm on a parameter, and obtains new stability results for the optimal control in the $L^\infty$ norm.
Abstract: For a nonlinear optimal control problem with state constraints, we give conditions under which the optimal control depends Lipschitz continuously in the $L^2$ norm on a parameter. These conditions involve smoothness of the problem data, uniform independence of active constraint gradients, and a coercivity condition for the integral functional. Under these same conditions, we obtain a new nonoptimal stability result for the optimal control in the $L^\infty$ norm. Under an additional assumption concerning the regularity of the state constraints, a new tight $L^\infty$ estimate is obtained. Our approach is based on an abstract implicit function theorem in nonlinear spaces.

Journal ArticleDOI
TL;DR: In this paper, a topological equivalence relation on the corresponding time-optimal feedback synthesis was introduced, and the set of equivalence classes can be put in a one-to-one correspondence with a discrete family of graphs.
Abstract: Consider the problem of stabilization at the origin in minimum time for a planar control system affine with respect to the control. For a family of generic vector fields, a topological equivalence relation on the corresponding time-optimal feedback synthesis was introduced in a previous paper [Dynamics of Continuous, Discrete and Impulsive Systems, 3 (1997), pp. 335--371]. The set of equivalence classes can be put in a one-to-one correspondence with a discrete family of graphs. This provides a classification of the global structure of generic time-optimal stabilizing feedbacks in the plane, analogous to the classification of smooth dynamical systems developed by Peixoto.