Journal ArticleDOI

Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models

01 Jul 1984 - Journal of the Royal Statistical Society, Series B (Methodological) (John Wiley & Sons, Ltd) - Vol. 46, Iss. 3, pp. 353-376
TL;DR: Stochastic calculus for piecewise-deterministic Markov processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper.
Abstract: A general class of non-diffusion stochastic models is introduced with a view to providing a framework for studying optimization problems arising in queueing systems, inventory theory, resource allocation and other areas. The corresponding stochastic processes are Markov processes consisting of a mixture of deterministic motion and random jumps. Stochastic calculus for these processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper. The relevance of the extended generator concept in applied problems is discussed and some recent results on optimal control of piecewise-deterministic processes are described.
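The defining structure of these processes, deterministic motion between random jump times, is easy to sketch in code. The following toy example is illustrative only (it is not from the paper, and all parameter names are hypothetical): the state decays exponentially between jumps, while jumps arrive at Poisson event times and add an exponentially distributed amount.

```python
import math
import random

def simulate_pdmp(x0=0.0, jump_rate=1.0, decay=0.5, t_end=10.0, seed=0):
    """Simulate a toy piecewise-deterministic Markov process.

    Between jumps the state follows the deterministic flow dx/dt = -decay * x
    (solved exactly below); at Poisson(jump_rate) event times the state
    receives an independent Exp(1) upward jump.
    Returns the list of (time, state) pairs at t = 0 and at each jump.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        dt = rng.expovariate(jump_rate)   # time until the next jump
        if t + dt > t_end:
            break
        x *= math.exp(-decay * dt)        # deterministic motion between jumps
        t += dt
        x += rng.expovariate(1.0)         # random jump
        path.append((t, x))
    return path
```

Queue workloads, inventory levels and storage processes of the kind mentioned in the abstract all fit this jump-plus-flow pattern with different flows and jump distributions.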
Citations
Book
18 Dec 1992
TL;DR: This book gives an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions, as well as a concise introduction to two-controller, zero-sum differential games.
Abstract: This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. A new Chapter X gives an introduction to the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Chapter VI of the First Edition has been completely rewritten, to emphasize the relationships between logarithmic transformations and risk sensitivity. A new Chapter XI gives a concise introduction to two-controller, zero-sum differential games. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations. The authors have tried, through illustrative examples and selective material, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. In this Second Edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.

3,885 citations

Journal ArticleDOI
TL;DR: In this paper, the authors developed criteria for continuous-parameter Markovian processes on general state spaces, based on Foster-Lyapunov inequalities for the extended generator, and applied the criteria to several specific processes, including linear stochastic systems under nonlinear feedback, work-modulated queues, general release storage processes and risk processes.
Abstract: In Part I we developed stability concepts for discrete chains, together with Foster–Lyapunov criteria for them to hold. Part II was devoted to developing related stability concepts for continuous-time processes. In this paper we develop criteria for these forms of stability for continuous-parameter Markovian processes on general state spaces, based on Foster-Lyapunov inequalities for the extended generator. Such test function criteria are found for non-explosivity, non-evanescence, Harris recurrence, and positive Harris recurrence. These results are proved by systematic application of Dynkin's formula. We also strengthen known ergodic theorems, and especially exponential ergodic results, for continuous-time processes. In particular we are able to show that the test function approach provides a criterion for f-norm convergence, and bounding constants for such convergence in the exponential ergodic case. We apply the criteria to several specific processes, including linear stochastic systems under non-linear feedback, work-modulated queues, general release storage processes and risk processes.
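The kind of drift computation behind these Foster-Lyapunov criteria can be shown on the simplest example. The sketch below is illustrative (not from the paper, with hypothetical rates `lam` and `mu`): it applies the extended generator of an M/M/1 queue length process to the test function V(n) = n.

```python
def extended_generator_drift(x, lam=1.0, mu=2.0):
    """Apply the M/M/1 queue-length generator to the test function V(n) = n:

        (QV)(x) = lam * (V(x+1) - V(x)) + mu * 1{x > 0} * (V(x-1) - V(x)).

    For mu > lam the drift equals lam - mu < 0 for all x >= 1, giving a
    Foster-Lyapunov inequality (QV)(x) <= -eps + b * 1{x = 0}, the kind of
    test-function condition that implies positive (Harris) recurrence.
    """
    V = float                              # V(n) = n
    drift = lam * (V(x + 1) - V(x))        # arrival term
    if x > 0:
        drift += mu * (V(x - 1) - V(x))    # service term, only off the boundary
    return drift
```

With `lam=1.0` and `mu=2.0`, the drift is -1.0 at every x >= 1 and +1.0 at x = 0, so the inequality holds with eps = 1 and b = 2.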

1,000 citations

01 Jan 2001
TL;DR: Doctoral thesis (2001) supervised by Mostafa Adimy (INRIA), Michael C. Mackey (McGill University) and Laurent Pujo-Menjouet (Université Lyon 1); the record below lists the examination committee.
Abstract: Mostafa ADIMY, Research Director at INRIA, thesis supervisor; Ionel S. CIUPERCA, Lecturer at Université Lyon 1, examiner; Michael C. MACKEY, Research Director at McGill University, thesis supervisor; Sylvie MÉLÉARD, Professor at École Polytechnique, examiner; Sophie MERCIER, Professor at Université de Pau et des Pays de l'Adour, reviewer; Laurent PUJO-MENJOUET, Lecturer at Université Lyon 1, thesis supervisor; Marta TYRAN-KAMIŃSKA, Professor at the University of Silesia, examiner; Bernard YCART, Professor at Université de Grenoble, reviewer

713 citations

Journal ArticleDOI
TL;DR: An age of information (AoI) timeliness metric is formulated and a general result for the AoI is derived that is applicable to a wide variety of multiple-source service systems; the method makes AoI evaluation comparable in complexity to finding the stationary distribution of a finite-state Markov chain.
Abstract: We examine multiple independent sources providing status updates to a monitor through simple queues. We formulate an age of information (AoI) timeliness metric and derive a general result for the AoI that is applicable to a wide variety of multiple source service systems. For first-come first-served and two types of last-come first-served systems with Poisson arrivals and exponential service times, we find the region of feasible average status ages for multiple updating sources. We then use these results to characterize how a service facility can be shared among multiple updating sources. A new simplified technique for evaluating the AoI in finite-state continuous-time queuing systems is also derived. Based on stochastic hybrid systems, this method makes AoI evaluation comparable in complexity to finding the stationary distribution of a finite-state Markov chain.
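The age process described here is a sawtooth: the age grows at unit rate and drops to the system time of the freshest delivered update. A time-average AoI can therefore be estimated by integrating trapezoids under the sawtooth. The sketch below is a Monte-Carlo illustration under assumed parameters (it is not the paper's SHS method): a single source sending updates through an FCFS M/M/1 queue.

```python
import random

def average_aoi_mm1(lam=0.5, mu=1.0, n=50000, seed=1):
    """Estimate the time-average Age of Information for one source whose
    updates pass through an FCFS M/M/1 queue with arrival rate lam and
    service rate mu. The age grows at unit rate and resets, at each
    departure, to that update's system time (departure - generation)."""
    rng = random.Random(seed)
    arrival = 0.0   # generation time of the current update
    depart = 0.0    # departure time of the previous update
    prev_gen = 0.0  # generation time of the last delivered update
    prev_dep = 0.0  # its departure time
    area = 0.0      # integral of the sawtooth age curve
    for _ in range(n):
        arrival += rng.expovariate(lam)
        start = max(arrival, depart)          # FCFS: wait for the server
        depart = start + rng.expovariate(mu)  # exponential service
        # Between prev_dep and depart the age rises linearly from
        # (prev_dep - prev_gen); accumulate the trapezoid area.
        a0 = prev_dep - prev_gen
        a1 = a0 + (depart - prev_dep)
        area += 0.5 * (a0 + a1) * (depart - prev_dep)
        prev_gen, prev_dep = arrival, depart
    return area / depart
```

For this system the known closed form is (1/mu) * (1 + 1/rho + rho^2 / (1 - rho)) with rho = lam/mu, so the defaults (rho = 0.5, mu = 1) should give an estimate near 3.5.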

552 citations


Cites methods from "Piecewise-Deterministic Markov Proc..."

  • ...We will see that AoI tracking can be implemented as a simplified SHS with non-negative linear reset maps in which the continuous state is a piecewise linear process [32], [33], a special case of piecewise deterministic processes [34], [35]....


Journal ArticleDOI
TL;DR: In this article, a simple monotone continuous-time Markov process with gamma increments is used to make inference on a partially observed monotone process: the stochastically varying rate of sedimentation in lake sediment cores.
Abstract: Summary. We propose a new and simple continuous Markov monotone stochastic process and use it to make inference on a partially observed monotone stochastic process. The process is piecewise linear, based on additive independent gamma increments arriving in a Poisson fashion. An independent increments variation allows very simple conditional simulation of sample paths given known values of the process. We take advantage of a reparameterization involving the Tweedie distribution to provide efficient computation. The motivating problem is the establishment of a chronology for samples taken from lake sediment cores, i.e. the attribution of a set of dates to samples of the core given their depths, knowing that the age–depth relationship is monotone. The chronological information arises from radiocarbon (14C) dating at a subset of depths. We use the process to model the stochastically varying rate of sedimentation.
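A sample path of this process class is straightforward to simulate. The sketch below is illustrative only, with assumed parameter values; it omits the paper's Tweedie reparameterization and conditional simulation given observed values. Independent gamma increments arrive at Poisson event times, and linear interpolation between the returned knots gives the monotone piecewise-linear path.

```python
import random

def monotone_gamma_path(rate=2.0, shape=1.5, scale=1.0, t_end=10.0, seed=3):
    """Sample one path of a monotone piecewise-linear process: independent
    Gamma(shape, scale) increments arrive at Poisson(rate) event times.
    Returns the knots (t, y); the path is linear between consecutive knots,
    so it is nondecreasing everywhere, as an age-depth model requires."""
    rng = random.Random(seed)
    t, y = 0.0, 0.0
    knots = [(t, y)]
    while True:
        t += rng.expovariate(rate)       # next Poisson event time
        if t > t_end:
            break
        y += rng.gammavariate(shape, scale)  # additive gamma increment
        knots.append((t, y))
    return knots
```

Monotonicity is automatic because gamma increments are nonnegative, which is exactly the property needed for an age-depth chronology.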

454 citations


Cites background from "Piecewise-Deterministic Markov Proc..."

  • ...One natural family of stochastic processes is those which are described as ‘partially deterministic Markov processes’ (Davis, 1984)....


  • ...The process that is proposed here is not a diffusion; it may in fact be seen as an example of a ‘partially deterministic Markov process’ (Davis, 1984)....


References
Book
17 Nov 1975
TL;DR: This book develops deterministic optimal control, from the calculus of variations and Pontryagin's principle through dynamic programming, and then treats stochastic differential equations and the optimal control of Markov diffusion processes.
Abstract: I The Simplest Problem in Calculus of Variations.- 1. Introduction.- 2. Minimum Problems on an Abstract Space-Elementary Theory.- 3. The Euler Equation Extremals.- 4. Examples.- 5. The Jacobi Necessary Condition.- 6. The Simplest Problem in n Dimensions.- II The Optimal Control Problem.- 1. Introduction.- 2. Examples.- 3. Statement of the Optimal Control Problem.- 4. Equivalent Problems.- 5. Statement of Pontryagin's Principle.- 6. Extremals for the Moon Landing Problem.- 7. Extremals for the Linear Regulator Problem.- 8. Extremals for the Simplest Problem in Calculus of Variations.- 9. General Features of the Moon Landing Problem.- 10. Summary of Preliminary Results.- 11. The Free Terminal Point Problem.- 12. Preliminary Discussion of the Proof of Pontryagin's Principle.- 13. A Multiplier Rule for an Abstract Nonlinear Programming Problem.- 14. A Cone of Variations for the Problem of Optimal Control.- 15. Verification of Pontryagin's Principle.- III Existence and Continuity Properties of Optimal Controls.- 1. The Existence Problem.- 2. An Existence Theorem (Mayer Problem U Compact).- 3. Proof of Theorem 2.1.- 4. More Existence Theorems.- 5. Proof of Theorem 4.1.- 6. Continuity Properties of Optimal Controls.- IV Dynamic Programming.- 1. Introduction.- 2. The Problem.- 3. The Value Function.- 4. The Partial Differential Equation of Dynamic Programming.- 5. The Linear Regulator Problem.- 6. Equations of Motion with Discontinuous Feedback Controls.- 7. Sufficient Conditions for Optimality.- 8. The Relationship between the Equation of Dynamic Programming and Pontryagin's Principle.- V Stochastic Differential Equations and Markov Diffusion Processes.- 1. Introduction.- 2. Continuous Stochastic Processes Brownian Motion Processes.- 3. Ito's Stochastic Integral.- 4. Stochastic Differential Equations.- 5. Markov Diffusion Processes.- 6. Backward Equations.- 7. Boundary Value Problems.- 8. Forward Equations.- 9. Linear System Equations; the Kalman-Bucy Filter.- 10. Absolutely Continuous Substitution of Probability Measures.- 11. An Extension of Theorems 5.1, 5.2.- VI Optimal Control of Markov Diffusion Processes.- 1. Introduction.- 2. The Dynamic Programming Equation for Controlled Markov Processes.- 3. Controlled Diffusion Processes.- 4. The Dynamic Programming Equation for Controlled Diffusions a Verification Theorem.- 5. The Linear Regulator Problem (Complete Observations of System States).- 6. Existence Theorems.- 7. Dependence of Optimal Performance on y and ?.- 8. Generalized Solutions of the Dynamic Programming Equation.- 9. Stochastic Approximation to the Deterministic Control Problem.- 10. Problems with Partial Observations.- 11. The Separation Principle.- Appendices.- A. Gronwall-Bellman Inequality.- B. Selecting a Measurable Function.- C. Convex Sets and Convex Functions.- D. Review of Basic Probability.- E. Results about Parabolic Equations.- F. A General Position Lemma.

3,027 citations

Book
01 Jan 1979
TL;DR: This book develops the martingale-problem formulation of diffusion processes, covering existence, uniqueness, estimates on transition probability functions, explosion, limit theorems, and the non-unique case.
Abstract: Preliminary Material: Extension Theorems, Martingales, and Compactness.- Markov Processes, Regularity of Their Sample Paths, and the Wiener Measure.- Parabolic Partial Differential Equations.- The Stochastic Calculus of Diffusion Theory.- Stochastic Differential Equations.- The Martingale Formulation.- Uniqueness.- Ito's Uniqueness and Uniqueness to the Martingale Problem.- Some Estimates on the Transition Probability Functions.- Explosion.- Limit Theorems.- The Non-Unique Case

2,626 citations

Book
01 Jan 1979
TL;DR: This classic in stochastic network modelling broke new ground when it was published in 1979, and it remains a superb introduction to reversibility and its applications thanks to the author's clear and easy-to-read style.
Abstract: This classic in stochastic network modelling broke new ground when it was published in 1979, and it remains a superb introduction to reversibility and its applications. The book concerns behaviour in equilibrium of vector stochastic processes or stochastic networks. When a stochastic network is reversible its analysis is greatly simplified, and the first chapter is devoted to a discussion of the concept of reversibility. The rest of the book focuses on the various applications of reversibility and the extent to which the assumption of reversibility can be relaxed without destroying the associated tractability. Now back in print for a new generation, this book makes enjoyable reading for anyone interested in stochastic processes thanks to the author's clear and easy-to-read style. Elementary probability is the only prerequisite and exercises are interspersed throughout.

2,480 citations

Journal ArticleDOI
TL;DR: This book develops the martingale theory of point processes and queues: stochastic intensities, integral representations of point-process martingales, filtering, likelihood ratios, optimal control of intensities, and flows in Markovian networks of queues, with an appendix on the Stieltjes-Lebesgue integral calculus.
Abstract: I Martingales.- 1. Histories and Stopping Times.- 2. Martingales.- 3. Predictability.- 4. Square-Integrable Martingales.- References.- Solutions to Exercises, Chapter I.- II Point Processes, Queues, and Intensities.- 1. Counting Processes and Queues.- 2. Watanabe's Characterization.- 3. Stochastic Intensity, General Case.- 4. Predictable Intensities.- 5. Representation of Queues.- 6. Random Changes of Time.- 7. Cryptographic Point Processes.- References.- Solutions to Exercises, Chapter II.- III Integral Representation of Point-Process Martingales.- 1. The Structure of Internal Histories.- 2. Regenerative Form of the Intensity.- 3. The Representation Theorem.- 4. Hilbert-Space Theory of Poissonian Martingales.- 5. Useful Extensions.- References.- Solutions to Exercises, Chapter III.- IV Filtering.- 1. The Theory of Innovations.- 2. State Estimates for Queues and Markov Chains.- 3. Continuous States and Nontrivial Prehistory.- References.- Solutions to Exercises, Chapter IV.- V Flows in Markovian Networks of Queues.- 1. Single Station: The Historical Results and the Filtering Method.- 2. Jackson's Networks.- 3. Burke's Output Theorem for Networks.- 4. Cascades and Loops in Jackson's Networks.- 5. Independence and Poissonian Flows in Markov Chains.- References.- Solutions to Exercises, Chapter V.- VI Likelihood Ratios.- 1. Radon-Nikodym Derivatives and Tests of Hypotheses.- 2. Changes of Intensities "a la Girsanov".- 3. Filtering by the Method of the Probability of Reference.- 4. Applications.- 5. The Capacity of a Point-Process Channel.- 6. Detection Formula.- References.- Solutions to Exercises, Chapter VI.- VII Optimal Control.- 1. Modeling Intensity Controls.- 2. Dynamic Programming for Intensity Controls: Complete-Observation Case.- 3. Input Regulation. A Case Study in Impulsive Control.- 4. Attraction Controls.- 5. Existence via Likelihood Ratio.- References.- Solutions to Exercises, Chapter VII.- VIII Marked Point Processes.- 1. Counting Measure and Intensity Kernels.- 2. Martingale Representation and Filtering.- 3. Radon-Nikodym Derivatives.- 4. Towards a General Theory of Intensity.- References.- Solutions to Exercises, Chapter VIII.- A1 Background in Probability and Stochastic Processes.- 1. Introduction.- 2. Monotone Class Theorem.- 3. Random Variables.- 4. Expectations.- 5. Conditioning and Independence.- 6. Convergence.- 7. Stochastic Processes.- 8. Markov Processes.- References.- A2 Stopping Times and Point-Process Histories.- 1. Stopping Times.- 2. Changes of Time and Meyer-Dellacherie's Integration Formula.- 3. Point-Process Histories.- References.- A3 Wiener-Driven Dynamical Systems.- 1. Ito's Stochastic Integral.- 2. Square-Integrable Brownian Martingales.- 3. Girsanov's Theorem.- References.- A4 Stieltjes-Lebesgue Calculus.- 1. The Stieltjes-Lebesgue Integral.- 2. The Product and Exponential Formulas.- References.- General Bibliography.

1,363 citations