Author

Lester E. Dubins

Bio: Lester E. Dubins is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Martingale (probability theory) & Probability measure. The author has an h-index of 26 and has co-authored 76 publications receiving 6,479 citations. Previous affiliations of Lester E. Dubins include University of California & University of Minnesota.


Papers
Journal ArticleDOI
TL;DR: The object is to prove that the Gale-Shapley algorithm for assigning students to universities gives each student the best university available to them in any stable system of assignments.
Abstract: Gale and Shapley have an algorithm for assigning students to universities which gives each student the best university available in a stable system of assignments. The object here is to prove that ...
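The algorithm in question is student-proposing deferred acceptance. A minimal sketch, assuming one seat per university; the names and preference lists below are hypothetical, not from the paper:

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
# Each university has capacity 1 here for simplicity.

def deferred_acceptance(student_prefs, university_prefs):
    """Return a stable matching {student: university}.

    student_prefs[s]    : list of universities, best first
    university_prefs[u] : list of students, best first
    """
    rank = {u: {s: i for i, s in enumerate(prefs)}
            for u, prefs in university_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next university to try
    held = {}                                     # university -> tentatively held student
    free = list(student_prefs)

    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted its list
        u = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        current = held.get(u)
        if current is None:
            held[u] = s                           # u tentatively holds s
        elif rank[u][s] < rank[u][current]:
            held[u] = s                           # u prefers s; bump current
            free.append(current)
        else:
            free.append(s)                        # u rejects s; s proposes again later

    return {s: u for u, s in held.items()}

students = {'a': ['X', 'Y'], 'b': ['X', 'Y']}
universities = {'X': ['b', 'a'], 'Y': ['a', 'b']}
print(deferred_acceptance(students, universities))  # {'b': 'X', 'a': 'Y'}
```

In the toy run, both students prefer X but X prefers b, so the student-optimal stable outcome sends a to Y: Y is the best university available to a in any stable assignment.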

632 citations

Journal ArticleDOI
TL;DR: A classic treatment of fair division: how to divide a cake among several participants so that each receives a share he or she judges to be fair.
Abstract: (1961). How to Cut a Cake Fairly. The American Mathematical Monthly: Vol. 68, No. 1, Part 1, pp. 1-17.
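The paper is best known for what is now called the Dubins-Spanier moving-knife procedure: a knife sweeps across the cake, and the first of the n players to judge the piece on the left worth 1/n calls "cut" and takes that piece; the process repeats with the remaining players. A minimal simulation sketch, assuming each player's valuation of [0, 1] is given by a CDF; the specific valuations below are hypothetical:

```python
# Sketch of the Dubins-Spanier moving-knife procedure on the cake [0, 1].
# cdfs[p] is player p's valuation CDF (cdfs[p](0) = 0, cdfs[p](1) = 1).

def moving_knife(cdfs, grid=10**4):
    """Return (player, left, right) pieces; each recipient values
    their piece at >= 1/n by their own measure."""
    n = len(cdfs)
    players = list(range(n))
    left = 0.0
    pieces = []
    while len(players) > 1:
        calls = {}
        for p in players:
            base = cdfs[p](left)
            # knife position at which the left piece reaches value 1/n for p
            for k in range(int(left * grid), grid + 1):
                x = k / grid
                if cdfs[p](x) - base >= 1.0 / n:
                    calls[p] = x
                    break
        caller = min(calls, key=calls.get)        # first player to call "cut"
        pieces.append((caller, left, calls[caller]))
        left = calls[caller]
        players.remove(caller)
    pieces.append((players[0], left, 1.0))        # last player takes the rest
    return pieces

uniform = lambda x: x            # values the cake evenly
right_heavy = lambda x: x * x    # concentrates value near the right end
print(moving_knife([uniform, right_heavy]))
# [(0, 0.0, 0.5), (1, 0.5, 1.0)]: player 1 values the right half at 0.75
```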

364 citations

Journal ArticleDOI
TL;DR: For any finitely additive probability measure to be disintegrable, that is, to be an average with respect to some marginal distribution of a system of finitely additive conditional probabilities, it suffices, and is plainly necessary, that the measure be conglomerative, that is, that there be a conditional expectation such that the expectation of no random variable can be negative if that random variable's conditional expectation given each of the marginal events is nonnegative.
Abstract: For any finitely additive probability measure to be disintegrable, that is, to be an average with respect to some marginal distribution of a system of finitely additive conditional probabilities, it suffices, and is plainly necessary, that the measure be conglomerative, that is, that there be a conditional expectation such that the expectation of no random variable can be negative if that random variable's conditional expectation given each of the marginal events is nonnegative. With respect to some margins, that is, partitions, there are finitely additive probability measures that are so far from being disintegrable that they cannot be approximated in the total variation norm by those that are. Those partitions which have this property are determined. Many partially defined conditional probabilities, and in particular, all disintegrations, or, equivalently, strategies, are restrictions of full conditional probabilities $Q = Q(A \mid B)$ defined for all pairs of events $A$ and $B$ with $B$ non-null.
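In symbols, the conglomerability condition described in the abstract can be sketched as follows (notation assumed here: $\pi$ is the marginal partition and $E$ the finitely additive expectation):

```latex
% Conglomerability of E with respect to a partition \pi:
% for every bounded random variable X,
\[
  \bigl( E(X \mid B) \ge 0 \ \text{for every } B \in \pi \bigr)
  \;\Longrightarrow\; E(X) \ge 0 .
\]
% Disintegrability asks more: E must be an average of the conditionals
% with respect to some marginal distribution \mu on \pi,
\[
  E(X) \;=\; \int_{\pi} E(X \mid B) \, d\mu(B) .
\]
```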

276 citations


Cited by
01 Jan 1967
TL;DR: The k-means algorithm partitions an N-dimensional population into k sets on the basis of a sample; the k-means concept generalizes the ordinary sample mean, and the resulting partitions are shown to be reasonably efficient in the sense of within-class variance.
Abstract: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if $p$ is the probability mass function for the population, $S = \{S_1, S_2, \cdots, S_k\}$ is a partition of $E_N$, and $u_i$, $i = 1, 2, \cdots, k$, is the conditional mean of $p$ over the set $S_i$, then $w^2(S) = \sum^k_{i=1} \int_{S_i} |z - u_i|^2 \, dp(z)$ tends to be low for the partitions $S$ generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
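For concreteness, a minimal sketch of the k-means idea follows. This is the batch ("Lloyd") variant that alternates assignment and mean-update steps; MacQueen's original procedure updates the means sequentially as each sample point arrives. The data and k below are hypothetical:

```python
# Minimal sketch of k-means on points in R^N (batch variant).
import random

def kmeans(points, k, iters=50):
    means = random.sample(points, k)              # initial k means from the sample
    for _ in range(iters):
        # assignment step: each point joins the set S_i of its nearest mean
        sets = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, means[j])))
            sets[i].append(p)
        # update step: each mean becomes the conditional mean over S_i
        for j, s in enumerate(sets):
            if s:
                means[j] = tuple(sum(c) / len(s) for c in zip(*s))
    return means, sets

random.seed(0)
pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
means, sets = kmeans(pts, k=2)
print(means)   # two means near (0.1, 0.05) and (5.1, 5.0)
```

Each iteration can only decrease the within-class variance $w^2(S)$, which is why the generated partitions "tend to be low" in that objective.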

24,320 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
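As a toy illustration of the planning-in-discrete-spaces material the blurb mentions, here is a minimal breadth-first-search planner on a 2D occupancy grid. The grid, start, and goal are hypothetical, and the book itself covers far more general methods:

```python
# Minimal sketch of discrete-space planning: BFS on an occupancy grid.
from collections import deque

def bfs_plan(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:               # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
print(bfs_plan(grid, (0, 0), (2, 0)))  # path detours around the obstacle row
```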

6,340 citations

Book
Rick Durrett
01 Jan 1990
TL;DR: A comprehensive introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion.
Abstract: This book is an introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action, so there are 200 examples and 450 problems.

5,168 citations

Journal ArticleDOI
TL;DR: A class of prior distributions, called Dirichlet process priors, is proposed for nonparametric problems; with these priors, many nonparametric statistical problems can be treated analytically, yielding results comparable to the classical theory.
Abstract: The Bayesian approach to statistical problems, though fruitful in many ways, has been rather unsuccessful in treating nonparametric problems. This is due primarily to the difficulty in finding workable prior distributions on the parameter space, which in nonparametric problems is taken to be a set of probability distributions on a given sample space. There are two desirable properties of a prior distribution for nonparametric problems. (I) The support of the prior distribution should be large--with respect to some suitable topology on the space of probability distributions on the sample space. (II) Posterior distributions given a sample of observations from the true probability distribution should be manageable analytically. These properties are antagonistic in the sense that one may be obtained at the expense of the other. This paper presents a class of prior distributions, called Dirichlet process priors, broad in the sense of (I), for which (II) is realized, and for which treatment of many nonparametric statistical problems may be carried out, yielding results that are comparable to the classical theory. In Section 2, we review the properties of the Dirichlet distribution needed for the description of the Dirichlet process given in Section 3. Briefly, this process may be described as follows. Let $\mathscr{X}$ be a space and $\mathscr{A}$ a $\sigma$-field of subsets, and let $\alpha$ be a finite non-null measure on $(\mathscr{X}, \mathscr{A})$. Then a stochastic process $P$ indexed by elements $A$ of $\mathscr{A}$ is said to be a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha$ if for any measurable partition $(A_1, \cdots, A_k)$ of $\mathscr{X}$, the random vector $(P(A_1), \cdots, P(A_k))$ has a Dirichlet distribution with parameter $(\alpha(A_1), \cdots, \alpha(A_k))$. $P$ may be considered a random probability measure on $(\mathscr{X}, \mathscr{A})$. The main theorem states that if $P$ is a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha$, and if $X_1, \cdots, X_n$ is a sample from $P$, then the posterior distribution of $P$ given $X_1, \cdots, X_n$ is also a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha + \sum^n_1 \delta_{x_i}$, where $\delta_x$ denotes the measure giving mass one to the point $x$. In Section 4, an alternative definition of the Dirichlet process is given. This definition exhibits a version of the Dirichlet process that gives probability one to the set of discrete probability measures on $(\mathscr{X}, \mathscr{A})$. This is in contrast to Dubins and Freedman [2], whose methods for choosing a distribution function on the interval [0, 1] lead with probability one to singular continuous distributions. Methods of choosing a distribution function on [0, 1] that with probability one is absolutely continuous have been described by Kraft [7]. The general method of choosing a distribution function on [0, 1], described in Section 2 of Kraft and van Eeden [10], can of course be used to define the Dirichlet process on [0, 1]. Special mention must be made of the papers of Freedman and Fabius. Freedman [5] defines a notion of tailfree for a distribution on the set of all probability measures on a countable space $\mathscr{X}$. For a tailfree prior, the posterior distribution given a sample from the true probability measure may be fairly easily computed.
Fabius [3] extends the notion of tailfree to the case where $\mathscr{X}$ is the unit interval [0, 1], but it is clear his extension may be made to cover quite general $\mathscr{X}$. With such an extension, the Dirichlet process would be a special case of a tailfree distribution for which the posterior distribution has a particularly simple form. There are disadvantages to the fact that $P$ chosen by a Dirichlet process is discrete with probability one. These appear mainly because in sampling from a $P$ chosen by a Dirichlet process, we expect eventually to see one observation exactly equal to another. For example, consider the goodness-of-fit problem of testing the hypothesis $H_0$ that a distribution on the interval [0, 1] is uniform. If on the alternative hypothesis we place a Dirichlet process prior with parameter $\alpha$ itself a uniform measure on [0, 1], and if we are given a sample of size $n \geqq 2$, the only nontrivial nonrandomized Bayes rule is to reject $H_0$ if and only if two or more of the observations are exactly equal. This is really a test of the hypothesis that a distribution is continuous against the hypothesis that it is discrete. Thus, there is still a need for a prior that chooses a continuous distribution with probability one and yet satisfies properties (I) and (II). Some applications in which the possible doubling up of the values of the observations plays no essential role are presented in Section 5. These include the estimation of a distribution function, of a mean, of quantiles, of a variance and of a covariance. A two-sample problem is considered in which the Mann-Whitney statistic, equivalent to the rank-sum statistic, appears naturally. A decision theoretic upper tolerance limit for a quantile is also treated. Finally, a hypothesis testing problem concerning a quantile is shown to yield the sign test. In each of these problems, useful ways of combining prior information with the statistical observations appear. Other applications exist. In his Ph. D. dissertation [1], Charles Antoniak finds a need to consider mixtures of Dirichlet processes. He treats several problems, including the estimation of a mixing distribution, bio-assay, empirical Bayes problems, and discrimination problems.
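The main theorem quoted above yields a simple predictive rule: given $X_1, \cdots, X_n$, the next observation is drawn from $(\alpha + \sum^n_1 \delta_{x_i})/(\alpha(\mathscr{X}) + n)$. The sampling scheme below exploits that rule; it is the Polya-urn scheme of Blackwell and MacQueen rather than part of this paper, and the base measure and its mass are hypothetical choices:

```python
# Minimal sketch: sampling X_1, ..., X_n from a Dirichlet process prior
# via the predictive (Polya-urn) rule. Base measure alpha is taken here,
# hypothetically, as a * Uniform[0, 1] with total mass a.
import random

def dp_sample(n, a, base=random.random):
    xs = []
    for i in range(n):
        # with probability a / (a + i), draw a fresh atom from the base measure;
        # otherwise repeat one of the i previous values uniformly at random
        if random.random() < a / (a + i):
            xs.append(base())
        else:
            xs.append(random.choice(xs))
    return xs

random.seed(0)
xs = dp_sample(20, a=1.0)
print(len(set(xs)), "distinct values among", len(xs), "draws")
```

The repeated values ("doubling up") visible in the output reflect the fact, emphasized in the abstract, that a $P$ chosen by a Dirichlet process is discrete with probability one.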

5,033 citations

Book
01 Jan 1997
TL;DR: A comprehensive graduate treatment of measure-theoretic probability, from measure theory and martingales through Markov processes, Brownian motion, stochastic integration, SDEs, ergodic properties of Markov processes, and connections with PDEs and potential theory.
Abstract: Contents:
* Measure Theory - Basic Notions
* Measure Theory - Key Results
* Processes, Distributions, and Independence
* Random Sequences, Series, and Averages
* Characteristic Functions and Classical Limit Theorems
* Conditioning and Disintegration
* Martingales and Optional Times
* Markov Processes and Discrete-Time Chains
* Random Walks and Renewal Theory
* Stationary Processes and Ergodic Theory
* Special Notions of Symmetry and Invariance
* Poisson and Pure Jump-Type Markov Processes
* Gaussian Processes and Brownian Motion
* Skorohod Embedding and Invariance Principles
* Independent Increments and Infinite Divisibility
* Convergence of Random Processes, Measures, and Sets
* Stochastic Integrals and Quadratic Variation
* Continuous Martingales and Brownian Motion
* Feller Processes and Semigroups
* Ergodic Properties of Markov Processes
* Stochastic Differential Equations and Martingale Problems
* Local Time, Excursions, and Additive Functionals
* One-Dimensional SDEs and Diffusions
* Connections with PDEs and Potential Theory
* Predictability, Compensation, and Excessive Functions
* Semimartingales and General Stochastic Integration
* Large Deviations
* Appendix 1: Advanced Measure Theory
* Appendix 2: Some Special Spaces
* Historical and Bibliographical Notes
* Bibliography
* Indices

4,562 citations