Author

Raymond Rishel

Other affiliations: Brown University
Bio: Raymond Rishel is an academic researcher from the University of Kentucky. The author has contributed to research in the topics of optimal control and Markov processes, has an h-index of 13, and has co-authored 37 publications receiving 3,642 citations. Previous affiliations of Raymond Rishel include Brown University.

Papers
Book
17 Nov 1975
TL;DR: This book develops deterministic and stochastic optimal control, proceeding from the calculus of variations and Pontryagin's principle through dynamic programming to the optimal control of Markov diffusion processes.
Abstract (table of contents):
I. The Simplest Problem in Calculus of Variations: 1. Introduction. 2. Minimum Problems on an Abstract Space: Elementary Theory. 3. The Euler Equation; Extremals. 4. Examples. 5. The Jacobi Necessary Condition. 6. The Simplest Problem in n Dimensions.
II. The Optimal Control Problem: 1. Introduction. 2. Examples. 3. Statement of the Optimal Control Problem. 4. Equivalent Problems. 5. Statement of Pontryagin's Principle. 6. Extremals for the Moon Landing Problem. 7. Extremals for the Linear Regulator Problem. 8. Extremals for the Simplest Problem in Calculus of Variations. 9. General Features of the Moon Landing Problem. 10. Summary of Preliminary Results. 11. The Free Terminal Point Problem. 12. Preliminary Discussion of the Proof of Pontryagin's Principle. 13. A Multiplier Rule for an Abstract Nonlinear Programming Problem. 14. A Cone of Variations for the Problem of Optimal Control. 15. Verification of Pontryagin's Principle.
III. Existence and Continuity Properties of Optimal Controls: 1. The Existence Problem. 2. An Existence Theorem (Mayer Problem, U Compact). 3. Proof of Theorem 2.1. 4. More Existence Theorems. 5. Proof of Theorem 4.1. 6. Continuity Properties of Optimal Controls.
IV. Dynamic Programming: 1. Introduction. 2. The Problem. 3. The Value Function. 4. The Partial Differential Equation of Dynamic Programming. 5. The Linear Regulator Problem. 6. Equations of Motion with Discontinuous Feedback Controls. 7. Sufficient Conditions for Optimality. 8. The Relationship between the Equation of Dynamic Programming and Pontryagin's Principle.
V. Stochastic Differential Equations and Markov Diffusion Processes: 1. Introduction. 2. Continuous Stochastic Processes; Brownian Motion Processes. 3. Ito's Stochastic Integral. 4. Stochastic Differential Equations. 5. Markov Diffusion Processes. 6. Backward Equations. 7. Boundary Value Problems. 8. Forward Equations. 9. Linear System Equations; the Kalman-Bucy Filter. 10. Absolutely Continuous Substitution of Probability Measures. 11. An Extension of Theorems 5.1, 5.2.
VI. Optimal Control of Markov Diffusion Processes: 1. Introduction. 2. The Dynamic Programming Equation for Controlled Markov Processes. 3. Controlled Diffusion Processes. 4. The Dynamic Programming Equation for Controlled Diffusions; a Verification Theorem. 5. The Linear Regulator Problem (Complete Observations of System States). 6. Existence Theorems. 7. Dependence of Optimal Performance on y and ?. 8. Generalized Solutions of the Dynamic Programming Equation. 9. Stochastic Approximation to the Deterministic Control Problem. 10. Problems with Partial Observations. 11. The Separation Principle.
Appendices: A. Gronwall-Bellman Inequality. B. Selecting a Measurable Function. C. Convex Sets and Convex Functions. D. Review of Basic Probability. E. Results about Parabolic Equations. F. A General Position Lemma.
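As context for Chapter IV, the partial differential equation of dynamic programming takes, in standard notation, the following form (a sketch of the classical statement, not the book's exact wording): for state equation \(\dot{x} = f(t,x,u)\), running cost \(L\), and terminal cost \(\psi\), the value function \(V\) satisfies

\[
\frac{\partial V}{\partial t}(t,x) + \min_{u \in U}\Big\{ L(t,x,u) + f(t,x,u) \cdot \nabla_x V(t,x) \Big\} = 0, \qquad V(t_1, x) = \psi(x),
\]

and a control achieving the minimum along its own trajectory is optimal, which is the substance of the sufficiency (verification) result of Chapter IV.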

3,027 citations

Journal ArticleDOI
TL;DR: In this paper, dynamic programming optimality conditions are shown to be necessary and sufficient for optimality, and a stochastic minimum principle whose adjoints satisfy deterministic integral equations is defined and shown to be necessary and sufficient as well.
Abstract: Control of stochastic differential equations of the form \(\dot{x} = f^{r(t)}(t, x, u)\), in which r(t) is a finite-state Markov process, is discussed. Dynamic programming optimality conditions are shown to be necessary and sufficient for optimality. A stochastic minimum principle whose adjoints satisfy deterministic integral equations is defined and shown to be necessary and sufficient for optimality.
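The problem class can be sketched as follows (standard notation; the finite horizon and integral cost shown here are illustrative assumptions, not taken from the paper):

\[
\dot{x}(t) = f^{r(t)}\big(t, x(t), u(t)\big), \qquad J(u) = E\left[ \int_0^T L^{r(t)}\big(t, x(t), u(t)\big)\, dt \right],
\]

where r(t) is a finite-state Markov process that switches the dynamics among the vector fields \(f^1, \dots, f^m\), and the control u must be chosen nonanticipatively to minimize the expected cost J.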

106 citations

Journal ArticleDOI
TL;DR: In this article, models for wear in which wear is a continuous increasing stochastic process are set up, and optimal control problems for these models are posed and explicitly solved in one case.
Abstract: Models for wear in which wear is a continuous increasing stochastic process are set up. Optimal control problems for these models are posed and explicitly solved in one case.

45 citations

Book ChapterDOI
01 Jan 1975
TL;DR: In this paper, the validity of optimality conditions analogous to the Pontryagin Maximum Principle for deterministic control problems is investigated for this type of stochastic process, and a minimum principle involving the conditional jump rate, the conditional state jump distribution, the system performance rate, and the conditional expectation of the remaining performance is obtained.
Abstract: In queueing theory and many other fields, problems of control arise for stochastic processes with piecewise constant paths. In this paper the validity of optimality conditions analogous to the Pontryagin Maximum Principle for deterministic control problems is investigated for this type of stochastic process. A minimum principle which involves the conditional jump rate, the conditional state jump distribution, the system performance rate, and the conditional expectation of the remaining performance is obtained. The conditional expectation of the remaining performance plays the role of the adjoint variables. This conditional expectation satisfies a type of integral equation and an infinite system of ordinary differential equations.
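In schematic form, such a minimum principle requires that the optimal control achieve, at each time and state, the pointwise minimum (a sketch in standard notation for controlled jump processes, not the paper's exact statement):

\[
\min_{u}\Big\{ c(t,x,u) + \lambda(t,x,u) \int \big[ W(t,y) - W(t,x) \big]\, Q(dy \mid t, x, u) \Big\},
\]

where \(\lambda\) is the conditional jump rate, Q is the conditional state jump distribution, c is the performance rate, and W(t,x), the conditional expectation of the remaining performance, plays the role of the adjoint variables.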

44 citations


Cited by
Book
01 Jan 1994
TL;DR: This book presents a brief history of LMIs in control theory and surveys standard problems involving linear matrix inequalities, from linear differential inclusions to matrix problems with analytic solutions.
Abstract (table of contents):
Preface.
1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book.
2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions.
3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation.
4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs.
5. Analysis of LDIs, State Properties: Quadratic Stability; Invariant Ellipsoids.
6. Analysis of LDIs, Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties.
7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems.
8. Lure and Multiplier Methods: Analysis of Lure Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters.
9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis.
10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems.
Notation. List of Acronyms. Bibliography. Index.
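As a concrete illustration of the kind of problem the book treats, the quadratic stability test of Chapter 5 reduces to a feasibility LMI: find P > 0 with A_i^T P + P A_i < 0 at each vertex of the polytope. A minimal sketch in Python (assuming the cvxpy package and its SCS solver are installed; the vertex matrices are made-up examples, not from the book):

import numpy as np
import cvxpy as cp

# Example vertex matrices of a polytopic linear differential inclusion
# dx/dt = A(t) x, with A(t) in conv{A1, A2} (illustrative values).
A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])
A2 = np.array([[-2.0, 0.5], [1.0, -1.0]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin so the strict inequalities are numerically meaningful

# Quadratic stability LMIs: P > 0 and A_i^T P + P A_i < 0 for every vertex.
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve(solver=cp.SCS)
print("quadratically stable:", problem.status == cp.OPTIMAL)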

11,085 citations

Journal ArticleDOI
TL;DR: This review focuses on model predictive control of constrained systems, both linear and nonlinear, and distills from an extensive literature the essential principles that ensure stability, presenting a concise characterization of most of the model predictive controllers that have been proposed in the literature.
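The controllers the review characterizes share one computational core: at each sampling instant, solve a finite-horizon optimal control problem and apply only the first input. A minimal sketch of one such step for a linear system with quadratic cost and an input bound (assuming numpy and cvxpy are installed; the model, weights, and horizon are illustrative, and the simple terminal cost stands in for the terminal ingredients the review analyzes):

import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # illustrative discrete-time model
B = np.array([[0.005], [0.1]])          # (a sampled double integrator)
Q = np.eye(2)          # state weight
R = 0.1 * np.eye(1)    # input weight
N = 20                 # prediction horizon
u_max = 1.0            # input constraint |u| <= u_max

def mpc_step(x0):
    """Solve the finite-horizon problem from state x0; return the first input."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= u_max]
    cost += cp.quad_form(x[:, N], Q)  # terminal cost (one stability ingredient)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u[:, 0].value  # receding horizon: apply only the first input

print(mpc_step(np.array([1.0, 0.0])))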

8,064 citations

Journal ArticleDOI
TL;DR: The notion of viscosity solutions of scalar fully nonlinear partial differential equations of second order provides a framework in which startling comparison and uniqueness theorems, existence theorems, and theorems about continuous dependence may now be proved by very efficient and striking arguments, as discussed by the authors.
Abstract: The notion of viscosity solutions of scalar fully nonlinear partial differential equations of second order provides a framework in which startling comparison and uniqueness theorems, existence theorems, and theorems about continuous dependence may now be proved by very efficient and striking arguments. The range of important applications of these results is enormous. This article is a self-contained exposition of the basic theory of viscosity solutions.
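For reference, the basic definition runs as follows (a sketch with the usual degenerate-ellipticity sign convention, not a quotation from the article). An upper semicontinuous function u is a viscosity subsolution of \(F(x, u, Du, D^2 u) = 0\) if, whenever a smooth test function \(\varphi\) touches u from above (that is, \(u - \varphi\) has a local maximum at \(x_0\)),

\[
F\big(x_0, u(x_0), D\varphi(x_0), D^2\varphi(x_0)\big) \le 0 .
\]

Supersolutions are defined symmetrically, with local minima and the reversed inequality, and a viscosity solution is both. The derivatives fall on the test function, which is what lets merely continuous functions solve second-order equations and makes the comparison arguments work.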

5,267 citations

Book
18 Dec 1992
TL;DR: This book gives an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions, as well as a concise introduction to two-controller, zero-sum differential games.
Abstract: This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. A new Chapter X gives an introduction to the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Chapter VI of the First Edition has been completely rewritten, to emphasize the relationships between logarithmic transformations and risk sensitivity. A new Chapter XI gives a concise introduction to two-controller, zero-sum differential games. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations. The authors have tried, through illustrative examples and selective material, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. In this Second Edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.
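The dynamic programming method the authors follow leads to the Hamilton-Jacobi-Bellman equation; for a controlled diffusion \(dx = b(x,u)\,dt + \sigma(x,u)\,dW\) with running cost L, it takes the standard form (a sketch in conventional notation, not the book's exact formulation):

\[
\frac{\partial V}{\partial t} + \min_{u \in U}\Big\{ L(x,u) + b(x,u)\cdot \nabla_x V + \tfrac{1}{2}\,\operatorname{tr}\big(\sigma(x,u)\,\sigma(x,u)^{\top} D_x^2 V\big) \Big\} = 0,
\]

whose solutions are in general nonsmooth; this is precisely where viscosity solutions enter.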

3,885 citations

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of providing incentives over time for an agent with constant absolute risk aversion, and find that the optimal compensation scheme is a linear function of a vector of accounts which count the number of times that each of the N kinds of observable events occurs.
Abstract: We consider the problem of providing incentives over time for an agent with constant absolute risk aversion. The optimal compensation scheme is found to be a linear function of a vector of N accounts which count the number of times that each of the N kinds of observable events occurs. The number N is independent of the number of time periods, so the accounts may entail substantial aggregation. In a continuous time version of the problem, the agent controls the drift rate of a vector of accounts that is subject to frequent, small random fluctuations. The solution is as if the problem were the static one in which the agent controls only the mean of a multivariate normal distribution and the principal is constrained to use a linear compensation rule. If the principal can observe only coarser linear aggregates, such as revenues, costs, or profits, the optimal compensation scheme is then a linear function of those aggregates. The combination of exponential utility, normal distributions, and linear compensation schemes makes computations and comparative statics easy to do, as we illustrate. We interpret our linearity results as deriving in part from the richness of the agent's strategy space, which makes it possible for the agent to undermine and exploit complicated, nonlinear functions of the accounting aggregates.
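The continuous-time result is commonly summarized as follows (a compressed sketch; the notation is illustrative rather than the paper's). The agent controls the drift \(\mu_t\) of an outcome process

\[
dX_t = \mu_t\, dt + \sigma\, dB_t,
\]

has exponential (CARA) utility \(-\exp\{-r\,(w - c(\mu))\}\) over compensation w net of effort cost c, and the optimal compensation rule is linear in the terminal outcome, \(w = \alpha X_T + \beta\), so the dynamic problem collapses to a static one of choosing the mean of a normal distribution under a linear rule.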

2,843 citations