Author

J. Michael Harrison

Bio: J. Michael Harrison is an academic researcher from Stanford University. The author has contributed to research in topics: Queueing theory & Heavy traffic approximation. The author has an h-index of 45 and has co-authored 86 publications receiving 15,644 citations. Previous affiliations of J. Michael Harrison include University of Florida & University of Bristol.


Papers
01 Jan 2013
TL;DR: An open problem in stochastic network theory is presented, along with simulation results, a formal analysis, and a selective literature review that provide context and motivation; the problem is to devise a policy for dynamic resource allocation that achieves what is called hierarchical greedy ideal (HGI) performance in the heavy traffic limit.
Abstract: This paper presents an open problem in stochastic network theory, along with simulation results, a formal analysis, and a selective literature review that provide context and motivation. We consider the resource sharing networks introduced by Massoulie and Roberts (2000) to model the dynamic behavior of internet flows; the open problem is to devise a policy for dynamic resource allocation that achieves what we call hierarchical greedy ideal (HGI) performance in the heavy traffic limit. The existence of such a policy is suggested by formal analysis of an approximating Brownian control problem, assuming that there is "local traffic" on each processing resource.

2 citations
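As context for the open problem above, here is a minimal flow-level simulation of a two-link Massoulié-Roberts network with "local traffic" on each link, under proportionally fair allocation. All parameters, the SLSQP solve for the allocation, and proportional fairness itself (the paper asks about policies beyond it) are illustrative assumptions, not the paper's construction.

```python
# Toy flow-level simulation (illustrative, not from the paper).
# Class 0 crosses links 1 and 2; classes 1 and 2 are the "local
# traffic" on each link. Flows arrive as Poisson processes, bring
# exponential file sizes, and share capacity proportionally fairly.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1, 1, 0],         # link 1 serves classes 0 and 1
              [1, 0, 1]])        # link 2 serves classes 0 and 2
cap = np.array([1.0, 1.0])       # link capacities
lam = np.array([0.3, 0.5, 0.5])  # arrival rates (load 0.8 per link)
mu = 1.0                         # 1 / mean file size

def pf_rates(n):
    """Per-flow proportionally fair rates given flow counts n."""
    active = n > 0
    if not active.any():
        return np.zeros(len(n))
    nA = n[active].astype(float)
    Asub = A[:, active]
    neg_util = lambda x: -np.sum(nA * np.log(np.maximum(x, 1e-12)))
    cons = [{'type': 'ineq',
             'fun': lambda x, l=l: cap[l] - Asub[l] @ (nA * x)}
            for l in range(len(cap))]
    res = minimize(neg_util, np.full(nA.size, 0.1), method='SLSQP',
                   bounds=[(1e-9, 1.0)] * nA.size, constraints=cons)
    x = np.zeros(len(n))
    x[active] = res.x
    return x

rng = np.random.default_rng(0)
n = np.zeros(3, dtype=int)       # flows in system, by class
t, t_end, area = 0.0, 2000.0, np.zeros(3)
while t < t_end:
    dep = n * pf_rates(n) * mu           # class departure rates
    rates = np.concatenate([lam, dep])   # 3 arrival + 3 departure events
    dt = rng.exponential(1.0 / rates.sum())
    area += n * min(dt, t_end - t)       # integrate flow counts over time
    t += dt
    k = rng.choice(6, p=rates / rates.sum())
    n[k % 3] += 1 if k < 3 else -1
print("time-average flow counts by class:", area / t_end)
```

Comparing such time-average flow counts across allocation policies is one way to probe how far a given policy falls short of ideal performance in heavy traffic.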

Posted Content
TL;DR: In this article, the authors consider a firm that can use one of several costly learning modes to dynamically reduce uncertainty about the unknown value of a project, and show that in the undiscounted case the optimal learning policy is to choose the mode with the smallest cost per unit of signal quality.
Abstract: We consider a firm that can use one of several costly learning modes to dynamically reduce uncertainty about the unknown value of a project. Each learning mode incurs cost at a particular rate and provides information of a particular quality. In addition to dynamic decisions about its learning mode, the firm must decide when to stop learning and either invest or abandon the project. Using a continuous-time Bayesian framework, and assuming a binary prior distribution for the project’s unknown value, we solve both the discounted and undiscounted versions of this problem. In the undiscounted case, the optimal learning policy is to choose the mode that has the smallest cost per signal quality. When the discount rate is strictly positive, we prove that an optimal learning and investment policy can be summarized by a small number of critical values, and the firm only uses learning modes that lie on a certain convex envelope in cost-rate-versus-signal-quality space. We extend our analysis to consider a firm that can choose multiple learning modes simultaneously, which requires the analysis of both investment timing and dynamic subset selection decisions. We solve both the discounted and undiscounted versions of this problem, and explicitly identify sets of learning modes that are used under the optimal policy.

2 citations
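The undiscounted rule above, and the convex-envelope property stated for the discounted case, have a simple geometry. Below is a sketch with our own illustrative numbers and hull construction (not the paper's notation): it picks the mode minimizing cost rate per unit of signal quality, then computes the lower convex hull of the modes in (quality, cost-rate) space.

```python
# Illustrative mode-selection geometry; the numbers are made up.
modes = [(1.0, 0.9), (2.0, 1.1), (3.0, 2.8), (4.0, 2.4), (5.0, 4.5)]
# each tuple: (signal quality q, cost rate c)

# Undiscounted case: choose the smallest cost per unit of signal quality.
best = min(modes, key=lambda m: m[1] / m[0])
print("undiscounted optimum (q, c):", best)

def lower_hull(pts):
    """Lower convex hull, left to right (Andrew's monotone chain)."""
    pts = sorted(pts)
    hull = []
    for p in pts:
        # pop while the last hull point lies on or above the
        # segment hull[-2] -> p (a non-left turn)
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Discounted case (per the abstract): only modes on a convex envelope
# in cost-rate-versus-signal-quality space are ever used; the lower
# hull below is our stand-in for that envelope.
print("modes on the lower envelope:", lower_hull(modes))
```

Here mode (3.0, 2.8) lies above the hull and is never used, matching the abstract's qualitative picture.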

Book ChapterDOI
01 Oct 2020

1 citation


Cited by
Journal ArticleDOI
TL;DR: In this paper, a simple discrete-time model for valuing options is presented; the Black-Scholes formula, which had previously been derived only by much more difficult methods, is obtained as a limiting case.

5,864 citations
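The entry above describes the classic discrete-time (binomial) option model whose limit is the Black-Scholes formula. A minimal sketch of that convergence, with illustrative parameters and a Cox-Ross-Rubinstein-style discretization of our choosing:

```python
# Binomial-tree call price converging to Black-Scholes (illustrative).
import math
from statistics import NormalDist

def binomial_call(S, K, r, sigma, T, n):
    """European call on an n-step Cox-Ross-Rubinstein tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    payoff = lambda j: max(S * u**j * d**(n - j) - K, 0.0)
    price = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) * payoff(j)
                for j in range(n + 1))
    return math.exp(-r * T) * price

def black_scholes_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
for n in (10, 100, 1000):
    print(f"{n:5d} steps: {binomial_call(S, K, r, sigma, T, n):.4f}")
print(f"Black-Scholes: {black_scholes_call(S, K, r, sigma, T):.4f}")
```

With these parameters the tree prices approach the Black-Scholes value of about 10.45 as the step count grows.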

Journal ArticleDOI
TL;DR: A review of P. Billingsley's Convergence of Probability Measures, a standard reference on weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Book
18 Dec 1992
TL;DR: In this book, an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions is given, as well as a concise introduction to two-controller, zero-sum differential games.
Abstract: This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. A new Chapter X gives an introduction to the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Chapter VI of the First Edition has been completely rewritten, to emphasize the relationships between logarithmic transformations and risk sensitivity. A new Chapter XI gives a concise introduction to two-controller, zero-sum differential games. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations. The authors have tried, through illustrative examples and selective material, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. In this Second Edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.

3,885 citations
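To make the dynamic-programming machinery concrete, here is a small sketch in the Markov-chain-approximation spirit: an upwind finite-difference value iteration for the discounted HJB equation rho*V = min_u [x^2 + u^2 + u*V'(x) + (sigma^2/2)*V''(x)] arising from the controlled diffusion dX = u dt + sigma dW. The problem data, grid, and boundary treatment are our illustrative choices, not examples from the book.

```python
# Value iteration for a 1-D discounted stochastic control problem
# (illustrative; linear-quadratic so the answer can be checked).
import numpy as np

rho, sigma, h = 0.5, 1.0, 0.1
xs = np.arange(-3.0, 3.0 + h, h)       # truncated state grid
us = np.linspace(-4.0, 4.0, 81)        # candidate controls
V = np.zeros_like(xs)

for _ in range(10000):
    V_old = V.copy()
    Vp = np.r_[V[1:], V[-1]]           # right neighbor (reflecting edge)
    Vm = np.r_[V[0], V[:-1]]           # left neighbor
    best = np.full_like(V, np.inf)
    for u in us:
        Q = sigma**2 + h * abs(u)      # chain normalizer (upwind scheme)
        dt = h**2 / Q                  # interpolation time step
        pu = (sigma**2 / 2 + h * max(u, 0.0)) / Q    # up-move probability
        pd = (sigma**2 / 2 + h * max(-u, 0.0)) / Q   # down-move probability
        cand = (xs**2 + u**2) * dt + np.exp(-rho * dt) * (pu * Vp + pd * Vm)
        best = np.minimum(best, cand)
    V = best
    if np.abs(V - V_old).max() < 1e-8:
        break

# Closed form: V(x) = a*x^2 + sigma^2*a/rho with a^2 + rho*a - 1 = 0,
# so V(0) = sigma^2*a/rho, roughly 1.56 for these parameters.
print("numerical V(0):", V[len(xs) // 2])
```

The viscosity-solution theory the book develops is the framework that justifies convergence of monotone schemes like this one, even when the value function is not smooth.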

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness.

3,554 citations
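The review above centers on weak convergence in C[0, 1] and D[0, 1] and Donsker's invariance principle. A quick empirical illustration of that principle (our own toy check, using the reflection-principle law of the Brownian maximum):

```python
# Donsker's invariance principle, empirically: the running maximum of a
# scaled +/-1 random walk approaches the Brownian maximum, for which
# P(max_{t<=1} B_t <= x) = 2*Phi(x) - 1 by the reflection principle.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n, reps = 1000, 5000
steps = rng.choice([-1.0, 1.0], size=(reps, n))      # mean 0, variance 1
walk_max = steps.cumsum(axis=1).max(axis=1)          # max over k of S_k
scaled_max = np.maximum(walk_max, 0.0) / np.sqrt(n)  # include S_0 = 0

x = 1.0
empirical = (scaled_max <= x).mean()
limit = 2 * NormalDist().cdf(x) - 1
print(f"P(max <= {x}): empirical {empirical:.3f}, Brownian limit {limit:.3f}")
```

The same functional-limit viewpoint is what underlies the heavy traffic approximations in the queueing papers listed above.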