Book

Stochastic Controls: Hamiltonian Systems and HJB Equations

TL;DR: In this book, the authors develop the theory of stochastic optimal control, treating the maximum principle via stochastic Hamiltonian systems, dynamic programming via HJB equations and viscosity solutions, the relationship between the two approaches, linear-quadratic problems, and backward stochastic differential equations.
Abstract:
1. Basic Stochastic Calculus.- 1. Probability.- 1.1. Probability spaces.- 1.2. Random variables.- 1.3. Conditional expectation.- 1.4. Convergence of probabilities.- 2. Stochastic Processes.- 2.1. General considerations.- 2.2. Brownian motions.- 3. Stopping Times.- 4. Martingales.- 5. Ito's Integral.- 5.1. Nondifferentiability of Brownian motion.- 5.2. Definition of Ito's integral and basic properties.- 5.3. Ito's formula.- 5.4. Martingale representation theorems.- 6. Stochastic Differential Equations.- 6.1. Strong solutions.- 6.2. Weak solutions.- 6.3. Linear SDEs.- 6.4. Other types of SDEs.
2. Stochastic Optimal Control Problems.- 1. Introduction.- 2. Deterministic Cases Revisited.- 3. Examples of Stochastic Control Problems.- 3.1. Production planning.- 3.2. Investment vs. consumption.- 3.3. Reinsurance and dividend management.- 3.4. Technology diffusion.- 3.5. Queueing systems in heavy traffic.- 4. Formulations of Stochastic Optimal Control Problems.- 4.1. Strong formulation.- 4.2. Weak formulation.- 5. Existence of Optimal Controls.- 5.1. A deterministic result.- 5.2. Existence under strong formulation.- 5.3. Existence under weak formulation.- 6. Reachable Sets of Stochastic Control Systems.- 6.1. Nonconvexity of the reachable sets.- 6.2. Noncloseness of the reachable sets.- 7. Other Stochastic Control Models.- 7.1. Random duration.- 7.2. Optimal stopping.- 7.3. Singular and impulse controls.- 7.4. Risk-sensitive controls.- 7.5. Ergodic controls.- 7.6. Partially observable systems.- 8. Historical Remarks.
3. Maximum Principle and Stochastic Hamiltonian Systems.- 1. Introduction.- 2. The Deterministic Case Revisited.- 3. Statement of the Stochastic Maximum Principle.- 3.1. Adjoint equations.- 3.2. The maximum principle and stochastic Hamiltonian systems.- 3.3. A worked-out example.- 4. A Proof of the Maximum Principle.- 4.1. A moment estimate.- 4.2. Taylor expansions.- 4.3. Duality analysis and completion of the proof.- 5. Sufficient Conditions of Optimality.- 6. Problems with State Constraints.- 6.1. Formulation of the problem and the maximum principle.- 6.2. Some preliminary lemmas.- 6.3. A proof of Theorem 6.1.- 7. Historical Remarks.
4. Dynamic Programming and HJB Equations.- 1. Introduction.- 2. The Deterministic Case Revisited.- 3. The Stochastic Principle of Optimality and the HJB Equation.- 3.1. A stochastic framework for dynamic programming.- 3.2. Principle of optimality.- 3.3. The HJB equation.- 4. Other Properties of the Value Function.- 4.1. Continuous dependence on parameters.- 4.2. Semiconcavity.- 5. Viscosity Solutions.- 5.1. Definitions.- 5.2. Some properties.- 6. Uniqueness of Viscosity Solutions.- 6.1. A uniqueness theorem.- 6.2. Proofs of Lemmas 6.6 and 6.7.- 7. Historical Remarks.
5. The Relationship Between the Maximum Principle and Dynamic Programming.- 1. Introduction.- 2. Classical Hamilton-Jacobi Theory.- 3. Relationship for Deterministic Systems.- 3.1. Adjoint variable and value function: Smooth case.- 3.2. Economic interpretation.- 3.3. Methods of characteristics and the Feynman-Kac formula.- 3.4. Adjoint variable and value function: Nonsmooth case.- 3.5. Verification theorems.- 4. Relationship for Stochastic Systems.- 4.1. Smooth case.- 4.2. Nonsmooth case: Differentials in the spatial variable.- 4.3. Nonsmooth case: Differentials in the time variable.- 5. Stochastic Verification Theorems.- 5.1. Smooth case.- 5.2. Nonsmooth case.- 6. Optimal Feedback Controls.- 7. Historical Remarks.
6. Linear Quadratic Optimal Control Problems.- 1. Introduction.- 2. The Deterministic LQ Problems Revisited.- 2.1. Formulation.- 2.2. A minimization problem of a quadratic functional.- 2.3. A linear Hamiltonian system.- 2.4. The Riccati equation and feedback optimal control.- 3. Formulation of Stochastic LQ Problems.- 3.1. Statement of the problems.- 3.2. Examples.- 4. Finiteness and Solvability.- 5. A Necessary Condition and a Hamiltonian System.- 6. Stochastic Riccati Equations.- 7. Global Solvability of Stochastic Riccati Equations.- 7.1. Existence: The standard case.- 7.2. Existence: The case C = 0, S = 0, and Q, G ≥ 0.- 7.3. Existence: The one-dimensional case.- 8. A Mean-variance Portfolio Selection Problem.- 9. Historical Remarks.
7. Backward Stochastic Differential Equations.- 1. Introduction.- 2. Linear Backward Stochastic Differential Equations.- 3. Nonlinear Backward Stochastic Differential Equations.- 3.1. BSDEs in finite deterministic durations: Method of contraction mapping.- 3.2. BSDEs in random durations: Method of continuation.- 4. Feynman-Kac-Type Formulae.- 4.1. Representation via SDEs.- 4.2. Representation via BSDEs.- 5. Forward-Backward Stochastic Differential Equations.- 5.1. General formulation and nonsolvability.- 5.2. The four-step scheme, a heuristic derivation.- 5.3. Several solvable classes of FBSDEs.- 6. Option Pricing Problems.- 6.1. European call options and the Black-Scholes formula.- 6.2. Other options.- 7. Historical Remarks.
References.
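For orientation, the dynamic-programming chapters of the book build toward the HJB equation. The following is a minimal sketch of its standard finite-horizon form for a minimization problem; the notation $b, \sigma, f, h$ is generic and not quoted from the book's own statement:

$$ dX(s) = b(s, X(s), u(s))\,ds + \sigma(s, X(s), u(s))\,dW(s), \qquad X(t) = x, $$
$$ J(t, x; u(\cdot)) = \mathbb{E}\Big[ \int_t^T f(s, X(s), u(s))\,ds + h(X(T)) \Big]. $$

The value function $V(t,x) = \inf_{u(\cdot)} J(t,x;u(\cdot))$ formally satisfies

$$ -\,\partial_t V(t,x) = \inf_{u \in U}\Big\{ \tfrac{1}{2}\operatorname{tr}\big( \sigma\sigma^{\top}(t,x,u)\,\partial_{xx} V(t,x) \big) + b(t,x,u)^{\top}\partial_x V(t,x) + f(t,x,u) \Big\}, \qquad V(T,x) = h(x), $$

which in general has to be interpreted in the viscosity sense, as treated in Chapter 4.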
Citations
More filters
Book
01 Dec 1992
TL;DR: In this book, the authors develop the theory of stochastic equations in infinite dimensions, establishing existence and uniqueness for linear and nonlinear equations with additive and multiplicative noise, and studying properties of solutions such as Markov properties, absolute continuity, and large-time behaviour.
Abstract: Part I. Foundations: 1. Random variables 2. Probability measures 3. Stochastic processes 4. The stochastic integral Part II. Existence and Uniqueness: 5. Linear equations with additive noise 6. Linear equations with multiplicative noise 7. Existence and uniqueness for nonlinear equations 8. Martingale solutions Part III. Properties of Solutions: 9. Markov properties and Kolmogorov equations 10. Absolute continuity and Girsanov's theorem 11. Large time behaviour of solutions 12. Small noise asymptotic.
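As a hedged illustration of the setting (the semilinear form below is the standard one in this theory; the symbols $A, F, B, W$ are generic rather than quoted from the book), the equations studied are typically of the form

$$ dX(t) = \big[ A X(t) + F(X(t)) \big]\,dt + B(X(t))\,dW(t), \qquad X(0) = \xi, $$

where $A$ generates a $C_0$-semigroup on a Hilbert space $H$ and $W$ is a (cylindrical) Wiener process; "additive noise" corresponds to a constant operator $B$, while "multiplicative noise" allows $B$ to depend on the state.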

4,042 citations

Journal ArticleDOI
TL;DR: In this article, a continuous-time mean-variance portfolio selection problem is formulated as a bicriteria optimization problem, where the objective is to maximize the expected terminal return and minimize the variance of the terminal wealth.
Abstract: This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be "embedded" into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem.
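A brief sketch of the embedding idea described above (the weight $\mu$ and terminal wealth $x(T)$ are illustrative symbols, not necessarily the paper's own notation): the weighted bicriteria problem

$$ \min_{u(\cdot)} \; \mu\,\mathrm{Var}[x(T)] - \mathbb{E}[x(T)], \qquad \mu > 0, $$

is nonstandard because $\mathrm{Var}[x(T)] = \mathbb{E}[x(T)^2] - (\mathbb{E}[x(T)])^2$ contains the square of an expectation. It can be embedded into the family of standard stochastic LQ problems

$$ \min_{u(\cdot)} \; \mathbb{E}\big[ \mu\, x(T)^2 - \lambda\, x(T) \big], \qquad \lambda \in \mathbb{R}, $$

whose optimal controls, parameterized by $\lambda$, contain those of the original problem; optimizing over $\lambda$ then traces out the efficient frontier in closed form.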

979 citations

Book ChapterDOI
Shige Peng
TL;DR: In this article, the authors introduce a nonlinear expectation generated by a nonlinear heat equation with infinitesimal generator G. The G-standard normal distribution is introduced, and under the resulting G-expectation the canonical process is a G-Brownian motion.
Abstract: We introduce a notion of nonlinear expectation (G-expectation) generated by a nonlinear heat equation with infinitesimal generator G. We first discuss the notion of the G-standard normal distribution. With this nonlinear distribution we can introduce our G-expectation, under which the canonical process is a G-Brownian motion. We then establish the related stochastic calculus, especially stochastic integrals of Ito's type with respect to our G-Brownian motion, and derive the related Ito's formula. We also give the existence and uniqueness of solutions of stochastic differential equations under our G-expectation. As compared with our previous framework of g-expectations, the theory of G-expectation is intrinsic in the sense that it is not based on a given (linear) probability space.
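A hedged sketch of the one-dimensional objects mentioned above (this is the standard form in this literature; the symbols $\overline{\sigma}, \underline{\sigma}, \varphi$ are generic): the generator and the nonlinear heat equation are

$$ G(a) = \tfrac{1}{2}\big( \overline{\sigma}^2 a^{+} - \underline{\sigma}^2 a^{-} \big), \qquad 0 \le \underline{\sigma} \le \overline{\sigma}, $$
$$ \partial_t u - G(\partial_{xx} u) = 0, \qquad u(0, x) = \varphi(x). $$

A random variable $X$ is G-normally distributed when $u(t,x) := \mathbb{E}[\varphi(x + \sqrt{t}\,X)]$ solves this equation for each $\varphi$, and the G-Brownian motion is the canonical process whose increments are independent and G-normally distributed.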

653 citations


Additional excerpts

  • ...We consider the following type of simple processes: for a given partition $\pi_T = \{t_0, \cdots, t_N\}$ of $[0, T]$, we set $\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\, I_{[t_j, t_{j+1})}(t)$, where $\xi_i \in L_G^p(\mathcal{F}_{t_i})$, $i = 0, 1, 2, \cdots, N-1$, are given....


  • ...Remark 26. We set, for each $\eta \in M_G^{1,0}(0, T)$, $\tilde{E}_T[\eta] := \frac{1}{T}\int_0^T E[\eta_t]\,dt = \frac{1}{T}\sum_{j=0}^{N-1} E[\xi_j(\omega)]\,(t_{j+1} - t_j)$....


Posted Content
TL;DR: In this paper, a new approach of sublinear expectation is introduced to deal with the problem of probability and distribution model uncertainty, and a new type of normal distributions and the related central limit theorem under sublinear expectations are presented.
Abstract: In this book, we introduce a new approach of sublinear expectation to deal with the problem of probability and distribution model uncertainty. We present a new type of (robust) normal distributions and the related central limit theorem under sublinear expectations. We also present a new type of Brownian motion under sublinear expectations and the related stochastic calculus of Ito's type. The results provide robust tools for the problem of probability model uncertainty arising in financial risk management, statistics, and stochastic controls.
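For context, a hedged restatement of the basic notion (the standard definition in this framework, paraphrased rather than quoted): a sublinear expectation $\hat{\mathbb{E}}$ on a linear space $\mathcal{H}$ of random variables is a functional satisfying, for all $X, Y \in \mathcal{H}$:

(i) Monotonicity: $X \ge Y \implies \hat{\mathbb{E}}[X] \ge \hat{\mathbb{E}}[Y]$;
(ii) Constant preservation: $\hat{\mathbb{E}}[c] = c$ for each constant $c$;
(iii) Sub-additivity: $\hat{\mathbb{E}}[X + Y] \le \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$;
(iv) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$ for $\lambda \ge 0$.

The robust normal distribution and central limit theorem mentioned above are formulated with such an $\hat{\mathbb{E}}$ in place of a linear expectation.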

652 citations


Cites background or methods from "Stochastic controls : Hamiltonian s..."

  • ...We also refer to Yong and Zhou (1999) [124], as well as to Peng (1997) [93] (in Chinese) and (2004) [95], for systematic presentations of BSDE theory....


  • ...For books on the theory of viscosity solutions and the related HJB equations, see Barles (1994) [8], Fleming and Soner (1992) [49] as well as Yong and Zhou (1999) [124]....


  • ...…[25], Dellacherie and Meyer (1978 and 1982) [33], He, Wang and Yan (1992) [57], Itô and McKean (1965) [66], Ikeda and Watanabe (1981) [63], Kallenberg (2002) [72], Karatzas and Shreve (1988) [73], Øksendal (1998) [87], Protter (1990) [110], Revuz and Yor (1999)[111] and Yong and Zhou (1999) [124]....


Journal ArticleDOI
TL;DR: A continuous-time version of the Markowitz mean-variance portfolio selection model is proposed and analyzed for a market consisting of one bank account and multiple stocks, finding that if the interest rate is deterministic, then the results exhibit (rather unexpected) similarity to their no-regime-switching counterparts, even if the stock appreciation and volatility rates are Markov-modulated.
Abstract: A continuous-time version of the Markowitz mean-variance portfolio selection model is proposed and analyzed for a market consisting of one bank account and multiple stocks. The market parameters, including the bank interest rate and the appreciation and volatility rates of the stocks, depend on the market mode that switches among a finite number of states. The random regime switching is assumed to be independent of the underlying Brownian motion. This essentially renders the underlying market incomplete. A Markov chain modulated diffusion formulation is employed to model the problem. Using techniques of stochastic linear-quadratic control, mean-variance efficient portfolios and efficient frontiers are derived explicitly in closed forms, based on solutions of two systems of linear ordinary differential equations. Related issues such as a minimum-variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those for the case when there is no regime switching. An interesting observation is, however, that if the interest rate is deterministic, then the results exhibit (rather unexpected) similarity to their no-regime-switching counterparts, even if the stock appreciation and volatility rates are Markov-modulated.
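A minimal sketch of the Markov-modulated market described above (the symbols $r, b_i, \sigma_{ij}, \alpha(t)$ are illustrative; the paper's own notation may differ): with $\alpha(t)$ a continuous-time Markov chain taking values in a finite state space (the market mode), independent of the Brownian motion $W(t)$, the bank account and stock prices evolve as

$$ dS_0(t) = r(\alpha(t))\, S_0(t)\, dt, $$
$$ dS_i(t) = S_i(t)\Big[ b_i(\alpha(t))\, dt + \sum_{j} \sigma_{ij}(\alpha(t))\, dW_j(t) \Big], \qquad i = 1, \dots, m. $$

Applying stochastic LQ techniques to the resulting wealth equation then yields, as stated in the abstract, the efficient portfolios and efficient frontier in closed form via solutions of two systems of linear ordinary differential equations.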

486 citations


Cites background or methods from "Stochastic controls : Hamiltonian s..."

  • ...Using the recently developed stochastic linear-quadratic (LQ) control framework [4, 5, 29], Zhou and Li [31] studied the mean-variance problem for a continuous-time model from another angle....


  • ...The theory of stochastic control is rich, and much mathematical machinery is available; see Fleming and Soner [9] and Yong and Zhou [29], which provide an opportunity for treating more complicated situations....


  • ...Elliott [3], and Yao, Zhang, and Zhou [26]....
