Author

Michael Woodroofe

Bio: Michael Woodroofe is an academic researcher from the University of Michigan. The author has contributed to research in topics including estimators and the central limit theorem. The author has an h-index of 40 and has co-authored 143 publications receiving 5,957 citations. Previous affiliations of Michael Woodroofe include the University of Cambridge and Case Western Reserve University.


Papers
Book
01 Jan 1987
TL;DR: A monograph on nonlinear renewal theory and its applications in sequential analysis, covering randomly stopped sequences, random walks, the sequential probability ratio test, nonlinear renewal theory, local limit theorems, open-ended tests, repeated significance tests, multiparameter problems, estimation following sequential testing, and sequential estimation.
Abstract: Contents: Randomly Stopped Sequences; Random Walks; The Sequential Probability Ratio Test; Nonlinear Renewal Theory; Local Limit Theorems; Open-Ended Tests; Repeated Significance Tests; Multiparameter Problems; Estimation Following Sequential Testing; Sequential Estimation.
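
One of the monograph's central topics, the sequential probability ratio test, admits a compact illustration. Below is a minimal sketch, not taken from the book, of Wald's SPRT for Bernoulli data; the function name and the error rates alpha and beta are illustrative choices.

```python
import math
import random

def sprt_bernoulli(p0, p1, alpha, beta, draw, max_n=10_000):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli draws.

    Stops the first time the log likelihood ratio leaves the interval
    (log(beta/(1-alpha)), log((1-beta)/alpha)).
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n in range(1, max_n + 1):
        x = draw()                          # one Bernoulli observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", max_n

# True p = 0.6; test H0: p = 0.5 against H1: p = 0.7.
random.seed(1)
print(sprt_bernoulli(0.5, 0.7, 0.05, 0.05, lambda: random.random() < 0.6))
```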

526 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of estimating the distribution functions $F$ and $G$ and the population size $N$ from randomly truncated data (pairs observed only when $Y_e \leq X_e$), derived the nonparametric maximum likelihood estimators and their consistency, and showed that $\sqrt{N} \times$ the estimation error for $F$ converges in distribution to a Gaussian process when $\int_0^\infty (1/G)\, dF < \infty$.
Abstract: Let $\mathscr{P}$ be a finite population with $N \geq 1$ elements; for each $e \in \mathscr{P}$, let $X_e$ and $Y_e$ be independent, positive random variables with unknown distribution functions $F$ and $G$; and suppose that the pairs $(X_e, Y_e)$ are i.i.d. We consider the problem of estimating $F$, $G$, and $N$ when the data consist of those pairs $(X_e, Y_e)$ for which $e \in \mathscr{P}$ and $Y_e \leq X_e$. The nonparametric maximum likelihood estimators (MLEs) of $F$ and $G$ are described, and their asymptotic properties as $N \rightarrow \infty$ are derived. It is shown that the MLEs are consistent against pairs $(F, G)$ for which $F$ and $G$ are continuous, $G^{-1}(0) \leq F^{-1}(0)$, and $G^{-1}(1) \leq F^{-1}(1)$. $\sqrt{N} \times$ the estimation error for $F$ converges in distribution to a Gaussian process if $\int_0^\infty (1/G)\, dF < \infty$, but may fail to converge if this integral is infinite.
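
For continuous truncated data the nonparametric MLE takes a product-limit (Lynden-Bell-type) form. A minimal numpy sketch, assuming no ties; the function name and the simulated example are illustrative choices, not taken from the paper.

```python
import numpy as np

def truncated_npmle_F(x, y):
    """Product-limit estimate of F from observed pairs with y <= x.

    Risk set R(t) = #{j : y_j <= t <= x_j}; the estimate is
    F(t) = 1 - prod_{x_i <= t} (1 - 1/R(x_i)).  Assumes no ties.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xs = np.sort(x)
    R = np.array([np.sum((y <= t) & (t <= x)) for t in xs])
    return xs, 1.0 - np.cumprod(1.0 - 1.0 / R)

# Illustrative simulation: X = 1 + Exp(1), truncation Y ~ Uniform(0, 2);
# only pairs with Y <= X are observed (here G > 0 on the support of F,
# so the integral condition for sqrt(N)-asymptotics holds).
rng = np.random.default_rng(0)
X = 1.0 + rng.exponential(1.0, 2000)
Y = rng.uniform(0.0, 2.0, 2000)
keep = Y <= X
xs, F_hat = truncated_npmle_F(X[keep], Y[keep])
i = np.searchsorted(xs, 2.0)
print(F_hat[i], "vs true", 1 - np.exp(-1.0))   # estimate of F(2)
```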

474 citations

Journal ArticleDOI
TL;DR: In this paper, the authors measured 8394 line-of-sight velocities (±2.5 km s⁻¹) for 6804 stars from high-resolution spectra obtained at the Magellan and MMT telescopes.
Abstract: We present stellar velocity dispersion profiles for seven Milky Way dwarf spheroidal (dSph) satellite galaxies. We have measured 8394 line-of-sight velocities (±2.5 km s⁻¹) for 6804 stars from high-resolution spectra obtained at the Magellan and MMT telescopes. We combine these new data with previously published velocities to obtain the largest available kinematic samples, which include more than 5500 dSph members. All the measured dSphs have stellar velocity dispersion of order 10 km s⁻¹ that remains approximately constant with distance from the dSph center, out to and in some cases beyond the radius at which the mean surface brightness falls to the background level. Assuming dSphs reside within dark matter halos characterized by the NFW density profile, we obtain reasonable fits to the empirical velocity dispersion profiles. These fits imply that, among the seven dSphs, Mvir ~ 10⁸-10⁹ M☉. The mass enclosed at a radius of 600 pc, the region common to all data sets, lies in the range (2-7) × 10⁷ M☉.
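
For reference, the mass enclosed within radius r by an NFW halo has the closed form used in such fits. A minimal sketch; the scale density and scale radius below are illustrative round numbers, not the paper's fitted values.

```python
import numpy as np

def nfw_enclosed_mass(r_kpc, rho_s, r_s_kpc):
    """Mass enclosed within radius r for an NFW density profile:
    M(r) = 4*pi*rho_s*r_s^3 * [ln(1 + x) - x/(1 + x)], with x = r/r_s.
    rho_s in Msun/kpc^3, radii in kpc; returns Msun.
    """
    x = np.asarray(r_kpc, float) / r_s_kpc
    return 4.0 * np.pi * rho_s * r_s_kpc**3 * (np.log1p(x) - x / (1.0 + x))

# Illustrative (not fitted) values: rho_s = 3e7 Msun/kpc^3, r_s = 1 kpc;
# evaluated at 600 pc, the radius common to the paper's data sets.
print(f"M(<600 pc) = {nfw_enclosed_mass(0.6, 3e7, 1.0):.2e} Msun")
```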

328 citations

Journal ArticleDOI
TL;DR: In this paper, the authors find the asymptotic distribution of the excess $R_c = ct_c^\alpha - S_{t_c}$ as $c \rightarrow 0$ and apply the result to obtain second-order approximations to the expected sample size and risk of some sequential procedures for estimation.
Abstract: Several stopping times which arise from problems of sequential estimation may be written in the form $t_c = \inf\{n \geq m: S_n < cn^\alpha L(n)\}$, where $S_n$, $n \geq 1$, are the partial sums of i.i.d. random variables, $m \geq 1$, $\alpha > 1$, $L(n)$ is a convergent sequence, and $c$ is a positive parameter which is often allowed to approach zero. In this paper we find the asymptotic distribution of the excess $R_c = ct_c^\alpha - S_{t_c}$ as $c \rightarrow 0$ and use it to obtain sharp estimates for $E\{t_c\}$. We then apply our results to obtain second order approximations to the expected sample size and risk of some sequential procedures for estimation.
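
A small Monte Carlo makes the excess concrete. The sketch below takes $L(n) \equiv 1$, $\alpha = 2$, and Exp(1) increments, all illustrative choices rather than the paper's setting, and tracks the excess $R_c = ct_c^\alpha - S_{t_c}$ as $c$ shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess(c, alpha=2.0, m=1):
    """Run S_n (sums of i.i.d. Exp(1) increments) until S_n < c*n**alpha,
    i.e. the stopping rule with L(n) = 1; return R_c = c*t**alpha - S_t."""
    s, n = 0.0, 0
    while True:
        n += 1
        s += rng.exponential(1.0)
        if n >= m and s < c * n**alpha:
            return c * n**alpha - s

for c in (0.2, 0.02, 0.002):
    R = np.array([excess(c) for _ in range(1000)])
    print(f"c = {c:<5}  mean excess ~ {R.mean():.3f}")
```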

264 citations

Journal ArticleDOI
TL;DR: In this article, central limit theorems and invariance principles are obtained for additive functionals of a stationary ergodic Markov chain, where the conditions imposed restrict the moments of $g$ and the growth of the conditional means.
Abstract: Central limit theorems and invariance principles are obtained for additive functionals of a stationary ergodic Markov chain, say $S_n = g(X_1)+ \cdots + g(X_n)$ where $E[g(X_1)]= 0$ and $E[g(X_1)^2]<\infty$. The conditions imposed restrict the moments of $g$ and the growth of the conditional means $E(S_n|X_1)$. No other restrictions on the dependence structure of the chain are required. When specialized to shift processes, the conditions are implied by simple integral tests involving $g$.
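
The result covers simple chains such as a stationary Gaussian AR(1), for which the limiting variance is explicit. A minimal simulation sketch, assuming $g(x) = x$ and AR(1) dynamics (illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sn_over_sqrt_n(n, phi=0.5, reps=1000):
    """Simulate S_n/sqrt(n) for S_n = g(X_1) + ... + g(X_n) with g(x) = x,
    where X is a stationary AR(1) chain X_k = phi*X_{k-1} + eps_k."""
    out = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0.0, 1.0 / np.sqrt(1.0 - phi**2))  # stationary start
        s = 0.0
        for _ in range(n):
            x = phi * x + rng.normal()
            s += x
        out[r] = s / np.sqrt(n)
    return out

z = sn_over_sqrt_n(1000)
# Long-run standard deviation for AR(1): sigma_eps/(1 - phi) = 1/0.5 = 2.
print(f"sample std {z.std():.2f} vs theoretical 2.00")
```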

260 citations


Cited by
Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
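
The algorithm is implemented in scikit-learn; a brief usage sketch (assuming scikit-learn is installed) computing the full Lasso path via LARS on the diabetes data analyzed in the paper:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

# Diabetes data (10 covariates), the example used in the paper.
X, y = load_diabetes(return_X_y=True)

# One pass computes the entire Lasso path: the coefficients at every
# breakpoint of the constraint, at roughly the cost of one OLS fit.
alphas, active, coefs = lars_path(X, y, method="lasso")

print("number of path breakpoints:", len(alphas))
print("order in which variables enter:", active)
print("unconstrained (OLS) solution:", np.round(coefs[:, -1], 1))
```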

7,828 citations

Journal ArticleDOI
TL;DR: Convergence of Probability Measures, as mentioned in this paper, is P. Billingsley's monograph on the weak convergence of probability measures on metric spaces, a standard reference for the subject.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Journal ArticleDOI
David J. Thomson
01 Sep 1982
TL;DR: In this article, a "local" eigenexpansion is proposed for estimating the spectrum of a stationary time series from a finite sample of the process; computationally, it is equivalent to using the weighted average of a series of direct-spectrum estimates based on orthogonal data windows to treat both the bias and smoothing problems.
Abstract: In the choice of an estimator for the spectrum of a stationary time series from a finite sample of the process, the problems of bias control and consistency, or "smoothing," are dominant. In this paper we present a new method based on a "local" eigenexpansion to estimate the spectrum in terms of the solution of an integral equation. Computationally this method is equivalent to using the weighted average of a series of direct-spectrum estimates based on orthogonal data windows (discrete prolate spheroidal sequences) to treat both the bias and smoothing problems. Some of the attractive features of this estimate are: there are no arbitrary windows; it is a small sample theory; it is consistent; it provides an analysis-of-variance test for line components; and it has high resolution. We also show relations of this estimate to maximum-likelihood estimates, show that the estimation capacity of the estimate is high, and show applications to coherence and polyspectrum estimates.
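
The estimator is straightforward to sketch with the discrete prolate spheroidal sequences available in SciPy. The function below is a minimal, unweighted version (a plain average over tapers rather than Thomson's adaptive weighting), with illustrative parameters NW and K.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4.0, K=7):
    """Average the direct spectrum estimates from K orthogonal DPSS
    (Slepian) tapers with time-bandwidth product NW (plain average;
    Thomson's adaptive weighting is omitted for brevity)."""
    tapers = dpss(len(x), NW, Kmax=K)              # shape (K, len(x))
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# Example: two sinusoids in white noise, unit sampling rate.
rng = np.random.default_rng(0)
t = np.arange(1024)
x = (np.sin(2 * np.pi * 0.20 * t) + 0.5 * np.sin(2 * np.pi * 0.31 * t)
     + rng.normal(size=t.size))
psd = multitaper_psd(x)
freqs = np.fft.rfftfreq(t.size)
print("strongest bins near:", np.sort(freqs[np.argsort(psd)[-4:]]))
```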

3,921 citations

Journal ArticleDOI
TL;DR: In this article, Akaike's entropy-based Information Criterion (AIC) is extended in two ways without violating Akaike's main principles; the resulting criteria, CAIC and CAICF, are asymptotically consistent and penalize overparameterization more stringently.
Abstract: During the last fifteen years, Akaike's entropy-based Information Criterion (AIC) has had a fundamental impact in statistical model evaluation problems. This paper studies the general theory of the AIC procedure and provides its analytical extensions in two ways without violating Akaike's main principles. These extensions make AIC asymptotically consistent and penalize overparameterization more stringently to pick only the simplest of the “true” models. These selection criteria are called CAIC and CAICF. Asymptotic properties of AIC and its extensions are investigated, and empirical performances of these criteria are studied in choosing the correct degree of a polynomial model in two different Monte Carlo experiments under different conditions.
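
A minimal sketch of the criteria in the paper's polynomial-degree setting, comparing AIC's penalty $2k$ with CAIC's $k(\ln n + 1)$; the data-generating cubic and the parameter count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a cubic polynomial plus Gaussian noise.
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(0.0, 0.3, n)

for d in range(1, 8):
    rss = np.sum((y - np.polyval(np.polyfit(x, y, d), x)) ** 2)
    k = d + 2                          # d+1 coefficients + noise variance
    neg2ll = n * np.log(2 * np.pi * rss / n) + n   # Gaussian -2 log L at MLE
    aic = neg2ll + 2 * k
    caic = neg2ll + k * (np.log(n) + 1)            # Bozdogan's CAIC penalty
    print(f"degree {d}:  AIC = {aic:8.1f}   CAIC = {caic:8.1f}")
```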

3,850 citations