Author

Louis Gordon

Bio: Louis Gordon is an academic researcher from the University of Southern California. He has contributed to research on topics including the Poisson distribution and nonparametric statistics. He has an h-index of 19 and has co-authored 25 publications receiving 2,568 citations. Previous affiliations of Louis Gordon include the United States Department of Energy and the University of California, San Diego.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors present Chen's results in an easy-to-use form and give a multivariable extension that yields an upper bound on the total variation distance between a sequence of dependent indicator variables and a Poisson process with the same intensity.
Abstract: Convergence to the Poisson distribution, for the number of occurrences of dependent events, can often be established by computing only first and second moments, but not higher ones. This remarkable result is due to Chen (1975). The method also provides an upper bound on the total variation distance to the Poisson distribution, and succeeds in cases where third and higher moments blow up. This paper presents Chen's results in a form that is easy to use and gives a multivariable extension, which gives an upper bound on the total variation distance between a sequence of dependent indicator functions and a Poisson process with the same intensity. A corollary of this is an upper bound on the total variation distance between a sequence of dependent indicator variables and the process having the same marginals but independent coordinates.

522 citations
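
Below is a minimal sketch of the kind of computation the bound above enables, applied to the classic birthday problem. The pair indicators X_{ij} = 1 if persons i and j share a birthday are dependent, yet the bound needs only first and second moments. The b1/b2 notation follows the common statement of the Chen-Stein theorem, and the exact constants vary between versions of the result, so treat this as an illustration rather than a quotation of the paper's theorem.

from math import comb, exp

def birthday_chen_stein(n: int, d: int = 365):
    """Poisson approximation for the number of birthday-sharing pairs."""
    num_pairs = comb(n, 2)
    lam = num_pairs / d                        # E[W], W = number of coinciding pairs
    # Neighborhood of pair {i,j}: the 2(n-2) pairs sharing i or j, plus itself.
    b1 = num_pairs * (2 * (n - 2) + 1) / d**2  # sum of p_a * p_b over neighborhoods
    b2 = num_pairs * 2 * (n - 2) / d**2        # sum of E[X_a X_b] over distinct neighbors
    # b3 = 0 here: pairs sharing no person are independent.
    p_none_poisson = exp(-lam)                 # P(W = 0) under the Poisson approximation
    p_none_exact = 1.0                         # exact P(no shared birthday), for comparison
    for k in range(n):
        p_none_exact *= (d - k) / d
    return lam, b1 + b2, p_none_poisson, p_none_exact

lam, err, approx, exact = birthday_chen_stein(23)
print(f"lambda = {lam:.4f}, error bound b1 + b2 = {err:.4f}")
print(f"P(no match): Poisson {approx:.4f} vs exact {exact:.4f}")

For n = 23 this prints a Poisson estimate of about 0.50 against an exact value of about 0.49, with the b1 + b2 term bounding the discrepancy.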

Journal ArticleDOI
TL;DR: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution; as the authors discuss, in many cases this bound may be given in terms of first and second moments alone.
Abstract: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution. In many cases, this bound may be given in terms of first and second moments alone. We present a background of the method and state some fundamental Poisson approximation theorems. The body of this paper is an illustration, through varied examples, of the wide applicability and utility of the Chen-Stein method. These examples include birthday coincidences, head runs in coin tosses, random graphs, maxima of normal variates and random permutations and mappings. We conclude with an application to molecular biology. The variety of examples presented here does not exhaust the range of possible applications of the Chen-Stein method.

333 citations
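
As a quick numerical check of one of the examples above, the following sketch compares the Poisson approximation for head runs against simulation. Counting "declumped" runs of at least t heads (a run starting at position 1, or starting just after a tail) gives a count that is approximately Poisson with mean lambda = p^t + (n - t)(1 - p)p^t; the exact form of lambda here is my reading of the standard declumping argument, not a formula quoted from the paper.

import random
from math import exp

def longest_run(bits):
    """Length of the longest run of True values."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

n, p, t, trials = 1000, 0.5, 12, 5000
lam = p**t + (n - t) * (1 - p) * p**t    # expected number of declumped head runs
approx = exp(-lam)                       # Poisson approximation of P(longest run < t)
rng = random.Random(0)
hits = sum(longest_run([rng.random() < p for _ in range(n)]) < t
           for _ in range(trials))
print(f"P(longest head run < {t}): Poisson {approx:.4f} vs Monte Carlo {hits / trials:.4f}")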

Journal Article
TL;DR: In this note, tree-structured recursive partitioning schemes for classification, probability class estimation, and regression are adapted to cover censored survival analysis, making them applicable to more general situations than the famous semi-parametric model of Cox.
Abstract: In this note, tree-structured recursive partitioning schemes for classification, probability class estimation, and regression are adapted to cover censored survival analysis. The only assumptions required are those which guarantee identifiability of conditional distributions of lifetime given covariates. Thus, the techniques are applicable to more general situations than are those of the famous semi-parametric model of Cox.

279 citations
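
The splitting step of such a survival tree can be sketched in a few lines. The snippet below scans thresholds on a single covariate and scores each candidate split with the two-sample log-rank statistic, which handles censoring; the paper describes the general recursive-partitioning framework, and the log-rank criterion is one common concrete choice rather than necessarily the authors' exact rule.

import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank chi-square statistic; group is a boolean mask."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):        # distinct event times
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & group).sum()
        d = ((time == t) & (event == 1)).sum()   # events at time t
        d1 = ((time == t) & (event == 1) & group).sum()
        if n > 1:
            num += d1 - d * n1 / n               # observed minus expected events
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num**2 / var if var > 0 else 0.0

def best_split(x, time, event):
    """Return the threshold on covariate x maximizing the log-rank statistic."""
    best = (None, 0.0)
    for c in np.unique(x)[:-1]:
        s = logrank_stat(time, event, x <= c)
        if s > best[1]:
            best = (c, s)
    return best

# Toy data: lifetimes are longer when x > 0.5; censoring is independent.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
t_true = rng.exponential(1.0 + 2.0 * (x > 0.5))
censor = rng.exponential(3.0, 200)
time, event = np.minimum(t_true, censor), (t_true <= censor).astype(int)
print(best_split(x, time, event))                # threshold should land near 0.5

Growing the full tree is then a matter of applying best_split recursively to each child node until a stopping rule (for example, a minimum node size) is met.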

01 Jan 2013
TL;DR: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution; as this paper shows, in many cases the bound may be given in terms of first and second moments alone.
Abstract: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution. In many cases, this bound may be given in terms of first and second moments alone. We present a background of the method and state some fundamental Poisson approximation theorems. The body of this paper is an illustration, through varied examples, of the wide applicability and utility of the Chen-Stein method. These examples include birthday coincidences, head runs in coin tosses, random graphs, maxima of normal variates and random permutations and mappings. We conclude with an application to molecular biology. The variety of examples presented here does not exhaust the range of possible applications of the Chen-Stein method.

277 citations

Journal ArticleDOI
TL;DR: The large deviation theory of the binomial distribution is presented in an easy-to-use form, showing how to approximate the probability of k or more successes in n independent trials, each with success probability p, when the specified fraction a = k/n of successes satisfies 0 < p < a < 1.

169 citations
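
A worked instance of the approximation described above: for k = a*n successes with 0 < p < a < 1, the tail P(S_n >= k) is governed by the relative entropy H(a, p) = a*log(a/p) + (1 - a)*log((1 - a)/(1 - p)), and exp(-n*H(a, p)) is the standard Chernoff upper bound. The paper sharpens this with explicit prefactors, which the sketch below omits.

from math import comb, exp, log

def H(a: float, p: float) -> float:
    """Relative entropy between Bernoulli(a) and Bernoulli(p)."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

def binom_tail(n: int, k: int, p: float) -> float:
    """Exact P(S_n >= k) by direct summation."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

n, p, a = 100, 0.3, 0.5
k = int(a * n)
exact = binom_tail(n, k, p)
bound = exp(-n * H(a, p))                # Chernoff bound: always >= the exact tail
print(f"P(S_{n} >= {k}): exact {exact:.3e}, bound {bound:.3e}")

The gap between the bound and the exact tail reflects the polynomial prefactor that the large deviation theory in the paper makes explicit.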


Cited by
Book
01 Jan 1996
TL;DR: The Bayes error and Vapnik-Chervonenkis theory are applied as guides for empirical classifier selection, on the basis of explicit specification and explicit enforcement of the maximum likelihood principle.
Abstract: Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index

3,598 citations

Journal ArticleDOI
TL;DR: This article is a review of Modelling Extremal Events for Insurance and Finance, a monograph on the modeling of extremal events in insurance and finance.
Abstract: (2002). Modelling Extremal Events for Insurance and Finance. Journal of the American Statistical Association: Vol. 97, No. 457, pp. 360-360.

2,729 citations

Journal ArticleDOI
TL;DR: An analysis of tumor/patient characteristics and treatment variables in previous Radiation Therapy Oncology Group (RTOG) brain metastases studies was considered necessary to fully evaluate the benefit of new interventions such as radiosurgery.
Abstract: Promising results from new approaches such as radiosurgery or stereotactic radiosurgery of brain metastases have recently been reported. Are these results due to the therapy alone or can the results be attributed in part to patient selection? An analysis of tumor/patient characteristics and treatment variables in previous RTOG brain metastases studies was considered necessary to fully evaluate the benefit of these new interventions.

2,330 citations

Proceedings Article
02 Aug 1996
TL;DR: A new algorithm, NBTree, is proposed, which induces a hybrid of decision-tree classifiers and Naive-Bayes classifiers: the decision-tree nodes contain univariate splits as in regular decision trees, but the leaves contain Naive-Bayes classifiers.
Abstract: Naive-Bayes induction algorithms were previously shown to be surprisingly accurate on many classification tasks even when the conditional independence assumption on which they are based is violated. However, most studies were done on small databases. We show that in some larger databases, the accuracy of Naive-Bayes does not scale up as well as decision trees. We then propose a new algorithm, NBTree, which induces a hybrid of decision-tree classifiers and Naive-Bayes classifiers: the decision-tree nodes contain univariate splits as regular decision-trees, but the leaves contain Naive-Bayesian classifiers. The approach retains the interpretability of Naive-Bayes and decision trees, while resulting in classifiers that frequently outperform both constituents, especially in the larger databases tested.

1,667 citations
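
A simplified sketch of the NBTree idea in scikit-learn terms: grow a shallow decision tree, fit a Naive-Bayes model on the training points that reach each leaf, and predict by routing examples through the tree. Note that NBTree itself selects splits by cross-validated Naive-Bayes accuracy, so reusing sklearn's impurity-based splits here is a simplification of the paper's algorithm, not a reimplementation of it.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

class SimpleNBTree:
    def __init__(self, max_depth=3):
        self.tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)                  # leaf index for each sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:          # skip pure leaves
                self.leaf_models[leaf] = GaussianNB().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        pred = self.tree.predict(X)                  # fallback for pure leaves
        for leaf, nb in self.leaf_models.items():
            mask = leaves == leaf
            if mask.any():
                pred[mask] = nb.predict(X[mask])
        return pred

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SimpleNBTree().fit(X_tr, y_tr)
print("test accuracy:", (model.predict(X_te) == y_te).mean())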