Author

Mark H. A. Davis

Bio: Mark H. A. Davis is an academic researcher from Imperial College London. The author has contributed to research in topics: Stochastic control & Optimal control. The author has an h-index of 42 and has co-authored 172 publications receiving 8,806 citations. Previous affiliations of Mark H. A. Davis include Mitsubishi & the University of California, Berkeley.


Papers
Journal Article
TL;DR: It is shown that the optimal buying and selling policies are the local times of the two-dimensional process of bank and stock holdings at the boundaries of a wedge-shaped region which is determined by the solution of a nonlinear free boundary problem.
Abstract: In this paper, optimal consumption and investment decisions are studied for an investor who has available a bank account paying a fixed rate of interest and a stock whose price is a log-normal diffusion. This problem was solved by Merton and others when transactions between bank and stock are costless. Here we suppose that there are charges on all transactions equal to a fixed percentage of the amount transacted. It is shown that the optimal buying and selling policies are the local times of the two-dimensional process of bank and stock holdings at the boundaries of a wedge-shaped region which is determined by the solution of a nonlinear free boundary problem. An algorithm for solving the free boundary problem is given.
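For orientation, the singular-control problem behind this result can be sketched as follows; the notation (interest rate r, stock drift α, volatility σ, proportional cost rates λ and μ, discount rate δ) is illustrative shorthand for the standard formulation rather than a quotation from the paper.

```latex
% Sketch of the consumption/investment problem with proportional transaction costs.
% x_t = bank holdings, y_t = stock holdings, c_t = consumption rate,
% L_t, U_t = cumulative purchases and sales of stock (nondecreasing processes).
\begin{aligned}
  dx_t &= (r\,x_t - c_t)\,dt \;-\; (1+\lambda)\,dL_t \;+\; (1-\mu)\,dU_t,\\
  dy_t &= \alpha\,y_t\,dt \;+\; \sigma\,y_t\,dW_t \;+\; dL_t \;-\; dU_t,\\
  \text{maximise}\quad & \mathbb{E}\!\int_0^\infty e^{-\delta t}\,U(c_t)\,dt
  \quad\text{over admissible } (c,L,U).
\end{aligned}
```

In this formulation the optimal purchase and sale processes L and U increase only when the pair of bank and stock holdings sits on the two edges of the no-transaction wedge, which is how they come to be local times at those boundaries.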

1,320 citations

Book
01 Jan 1993
TL;DR: In this book, the author presents a new approach to problems of evaluating and optimizing the performance of continuous-time stochastic systems, based on the use of a family of Markov processes called Piecewise-Deterministic Processes (PDPs) as a general class of stochastic system models.
Abstract: This book presents a radically new approach to problems of evaluating and optimizing the performance of continuous-time stochastic systems. This approach is based on the use of a family of Markov processes called Piecewise-Deterministic Processes (PDPs) as a general class of stochastic system models. A PDP is a Markov process that follows deterministic trajectories between random jumps, the latter occurring either spontaneously, in a Poisson-like fashion, or when the process hits the boundary of its state space. This formulation includes an enormous variety of applied problems in engineering, operations research, management science and economics as special cases; examples include queueing systems, stochastic scheduling, inventory control, resource allocation problems, optimal planning of production or exploitation of renewable or non-renewable resources, insurance analysis, fault detection in process systems, and tracking of maneuvering targets, among many others. The first part of the book shows how these applications lead to the PDP as a system model, and the main properties of PDPs are derived. There is particular emphasis on the so-called extended generator of the process, which gives a general method for calculating expectations and distributions of system performance functions. The second half of the book is devoted to control theory for PDPs, with a view to controlling PDP models for optimal performance: characterizations are obtained of optimal strategies both for continuously-acting controllers and for control by intervention (impulse control). Throughout the book, modern methods of stochastic analysis are used, but all the necessary theory is developed from scratch and presented in a self-contained way. The book will be useful to engineers and scientists in the application areas as well as to mathematicians interested in applications of stochastic analysis.
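As a concrete illustration of the dynamics just described (deterministic motion between jumps, with jumps triggered either spontaneously or by hitting a boundary), here is a minimal simulation sketch; the particular flow, jump rate and post-jump reset rule are invented for the example and are not taken from the book.

```python
import math
import random

def simulate_pdp(x0=1.0, a=0.5, lam=1.0, boundary=4.0, t_end=10.0, seed=0):
    """Toy piecewise-deterministic process on (0, boundary].

    Between jumps the state follows the deterministic flow dx/dt = a*x.
    Jumps occur spontaneously at constant rate lam (exponential waiting
    times) or when the trajectory reaches the boundary; at a jump the
    state is reset to a uniformly drawn fraction of its current value.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_end:
        tau = rng.expovariate(lam)              # waiting time to a spontaneous jump
        t_hit = math.log(boundary / x) / a      # time for the flow to reach the boundary
        dt = min(tau, t_hit, t_end - t)
        x *= math.exp(a * dt)                   # deterministic motion along the flow
        t += dt
        if t >= t_end:                          # horizon reached before the next jump
            path.append((t, x))
            break
        x *= rng.uniform(0.2, 0.8)              # post-jump state (forced or spontaneous jump)
        path.append((t, x))
    return path

if __name__ == "__main__":
    for t, x in simulate_pdp()[:8]:
        print(f"t = {t:6.3f}   x = {x:6.3f}")
```

A state-dependent jump rate would require integrating the rate along the flow (or thinning) instead of drawing a single exponential time, which is where the book's general construction becomes useful.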

1,255 citations

Journal Article
TL;DR: Stochastic calculus for piecewise-deterministic Markov processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper.
Abstract: A general class of non-diffusion stochastic models is introduced with a view to providing a framework for studying optimization problems arising in queueing systems, inventory theory, resource allocation and other areas. The corresponding stochastic processes are Markov processes consisting of a mixture of deterministic motion and random jumps. Stochastic calculus for these processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper. The relevance of the extended generator concept in applied problems is discussed and some recent results on optimal control of piecewise-deterministic processes are described.
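For readers who have not met it, the extended generator of a piecewise-deterministic process typically takes the following schematic form; the notation (the vector field of the deterministic flow, the jump rate, the post-jump distribution Q and the active boundary Γ) is ours, and the precise domain conditions are the delicate part treated in the paper.

```latex
% Schematic extended generator of a piecewise-deterministic process.
\mathcal{A}f(x) \;=\; \mathfrak{X}f(x)
  \;+\; \lambda(x)\int_E \bigl(f(y)-f(x)\bigr)\,Q(dy;x),
\qquad
f(x) \;=\; \int_E f(y)\,Q(dy;x) \ \text{ for } x \in \Gamma .
```

Here the first term is the derivative of f along the deterministic flow, the integral term accounts for spontaneous jumps at rate λ(x), and the boundary condition encodes the forced jumps that occur when the trajectory reaches Γ.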

954 citations

Journal Article
TL;DR: In this article, the authors consider the problem of pricing European options in a market model similar to the Black-Scholes one, except that proportional transaction charges are levied on all sales and purchases.
Abstract: The authors consider the problem of pricing European options in a market model similar to the Black–Scholes one, except that proportional transaction charges are levied on all sales and purchases o...

499 citations

Book
01 Jan 1977
TL;DR: Topics covered include linear estimation theory, Hilbert space methods, orthogonal increments processes, the stochastic linear regulator, the separation principle, and linear stochastic control via dynamic programming.

Abstract: Linear estimation theory. Hilbert space. Stochastic processes. Orthogonal increments processes. Linear stochastic control. Dynamic programming. Stochastic linear regulator. Separation principle.

371 citations


Cited by
Journal Article
TL;DR: Convergence of Probability Measures, by P. Billingsley, is a classic reference on the weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Book
18 Dec 1992
TL;DR: In this paper, an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions is given, as well as a concise introduction to two-controller, zero-sum differential games.
Abstract: This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. A new Chapter X gives an introduction to the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Chapter VI of the First Edition has been completely rewritten, to emphasize the relationships between logarithmic transformations and risk sensitivity. A new Chapter XI gives a concise introduction to two-controller, zero-sum differential games. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations. The authors have tried, through illustrative examples and selective material, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. In this Second Edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.
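For a controlled diffusion with discounted infinite-horizon payoff, the dynamic programming approach described here leads to a Hamilton-Jacobi-Bellman equation of roughly the following shape; this is a schematic statement in our own notation, not a quotation from the book.

```latex
% Controlled diffusion  dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t,
% value function  V(x) = \sup_{u} \mathbb{E}_x \int_0^\infty e^{-\beta t} f(X_t,u_t)\,dt.
\beta\,V(x) \;=\; \sup_{u \in U}\Bigl\{\, b(x,u)\!\cdot\! DV(x)
  \;+\; \tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top}(x,u)\,D^{2}V(x)\bigr)
  \;+\; f(x,u) \Bigr\}.
```

When V fails to be differentiable, as it often does, the equation is interpreted in the viscosity sense, which is precisely the theory the book develops.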

3,885 citations

Book
01 Jun 1979
TL;DR: This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems, with step-by-step explanations that show clearly how to make practical use of the material.
Abstract: This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material. The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the engineering properties of the regulator. Topics include degree of stability, phase and gain margin, tolerance of time delay, effect of nonlinearities, asymptotic properties, and various sensitivity problems. The third section explores state estimation and robust controller design using state-estimate feedback. Numerous examples emphasize the issues related to consistent and accurate system design. Key topics include loop-recovery techniques, frequency shaping, and controller reduction, for both scalar and multivariable systems. Self-contained appendixes cover matrix theory, linear systems, the Pontryagin minimum principle, Lyapunov stability, and the Riccati equation. Newly added to this Dover edition is a complete solutions manual for the problems appearing at the conclusion of each section.
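To make the regulator portion concrete, here is a small numerical sketch of the infinite-horizon LQR design the book treats, using a double-integrator plant and cost weights chosen purely for illustration (the matrices are ours, and SciPy's Riccati solver is assumed to be available).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: x_dot = A x + B u, cost = integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.1]])

# Stabilising solution of the algebraic Riccati equation
#   A'P + P A - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain, u = -K x
K = np.linalg.solve(R, B.T @ P)

# Closed-loop dynamics; all eigenvalues should have negative real part
A_cl = A - B @ K
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
```

The Riccati equation solved here, and the stability and gain/phase margins of the resulting loop, are exactly the properties analysed in the second part of the book.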

3,254 citations

Book
01 Jan 2004
TL;DR: In this book, the author presents a general theory of Levy processes and a stochastic calculus for Levy processes in a direct and accessible way, including necessary and sufficient conditions for Levy processes to have finite moments.
Abstract: Levy processes form a wide and rich class of random processes, and have many applications ranging from physics to finance. Stochastic calculus is the mathematics of systems interacting with random noise. Here, the author ties these two subjects together, beginning with an introduction to the general theory of Levy processes, then leading on to develop the stochastic calculus for Levy processes in a direct and accessible way. This fully revised edition now features a number of new topics. These include: regular variation and subexponential distributions; necessary and sufficient conditions for Levy processes to have finite moments; characterisation of Levy processes with finite variation; Kunita's estimates for moments of Levy type stochastic integrals; new proofs of Ito representation and martingale representation theorems for general Levy processes; multiple Wiener-Levy integrals and chaos decomposition; an introduction to Malliavin calculus; an introduction to stability theory for Levy-driven SDEs.
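The backbone of the general theory referred to above is the Levy-Khintchine representation of the characteristic function, stated schematically below in our notation for a one-dimensional process.

```latex
% Levy-Khintchine formula for a one-dimensional Levy process X with triple (b, sigma^2, nu).
\mathbb{E}\bigl[e^{\,iuX_t}\bigr] \;=\; e^{\,t\,\eta(u)},\qquad
\eta(u) \;=\; i\,b\,u \;-\; \tfrac{1}{2}\,\sigma^{2}u^{2}
  \;+\; \int_{\mathbb{R}\setminus\{0\}}
     \bigl(e^{iux}-1-iux\,\mathbf{1}_{\{|x|\le 1\}}\bigr)\,\nu(dx),
\qquad \int \min(1,x^{2})\,\nu(dx) < \infty .
```

The finite-moment and finite-variation conditions mentioned in the abstract are expressed in terms of the Levy measure ν (together with the Gaussian coefficient σ).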

2,908 citations