
Showing papers by "William E. Strawderman" published in 2012


Journal ArticleDOI
TL;DR: In this article, the authors provide a development that unifies, simplifies and considerably extends a number of minimax results in the restricted parameter space literature, with applications such as estimating location or scale parameters under a lower (or upper) bound restriction.
Abstract: We provide a development that unifies, simplifies and extends considerably a number of minimax results in the restricted parameter space literature. Various applications follow, such as that of estimating location or scale parameters under a lower (or upper) bound restriction, location parameter vectors restricted to a polyhedral cone, scale parameters subject to restricted ratios or products, linear combinations of restricted location parameters, location parameters bounded to an interval with unknown scale, quantiles for location-scale families with parametric restrictions and restricted covariance matrices.
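
As a concrete instance of the lower-bound restriction treated in this line of work (a worked sketch, not taken from the paper itself): for a normal mean bounded below, the generalized Bayes estimator against the flat prior on the restricted range has a closed form, and minimaxity results of the kind unified here concern exactly such estimators.

\[
X \sim \mathcal{N}(\theta, 1),\ \theta \ge a,\qquad
\pi(\theta) \propto \mathbf{1}\{\theta \ge a\}
\ \Longrightarrow\
\delta_{\pi}(x) = E[\theta \mid x] = x + \frac{\phi(x-a)}{\Phi(x-a)},
\]

where \(\phi\) and \(\Phi\) denote the standard normal density and distribution function.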

24 citations


Book ChapterDOI
TL;DR: In this article, the authors considered estimating the predictive density for a normal linear model with unknown variance under alpha-divergence loss for -1 <= alpha <= 1, and gave a general canonical form for the problem, and then gave general expressions for the generalized Bayes solution under the above loss for each alpha.
Abstract: This paper considers estimation of the predictive density for a normal linear model with unknown variance under alpha-divergence loss for -1 <= alpha <= 1. We first give a general canonical form for the problem, and then give general expressions for the generalized Bayes solution under the above loss for each alpha. For a particular class of hierarchical generalized priors studied in Maruyama and Strawderman (2005, 2006) for the problems of estimating the mean vector and the variance respectively, we give the generalized Bayes predictive density. Additionally, we show that, for a subclass of these priors, the resulting estimator dominates the generalized Bayes estimator with respect to the right invariant prior when alpha=1, i.e., the best (fully) equivariant minimax estimator.
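
For orientation, one common parametrization of the α-divergence between a true density p and a predictive density \(\hat p\) (stated here as an assumption; the paper's own normalization may differ in constants) is

\[
D_\alpha(p,\hat p)=\int f_\alpha\!\left(\frac{\hat p(y)}{p(y)}\right)p(y)\,dy,
\qquad
f_\alpha(z)=
\begin{cases}
\frac{4}{1-\alpha^{2}}\left(1-z^{(1+\alpha)/2}\right), & |\alpha|<1,\\
z\log z, & \alpha=1,\\
-\log z, & \alpha=-1,
\end{cases}
\]

so that the endpoints α = ±1 recover the two Kullback-Leibler directions.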

15 citations


Journal ArticleDOI
TL;DR: In this paper, a review of advances in Stein-type shrinkage estimation for spherically symmetric distributions is presented, where the main focus is on distributional robustness results in cases where a residual vector is available to estimate an unknown scale parameter.
Abstract: This paper reviews advances in Stein-type shrinkage estimation for spherically symmetric distributions. Some emphasis is placed on developing intuition as to why shrinkage should work in location problems whether the underlying population is normal or not. Considerable attention is devoted to generalizing the "Stein lemma" which underlies much of the theoretical development of improved minimax estimation for spherically symmetric distributions. A main focus is on distributional robustness results in cases where a residual vector is available to estimate an unknown scale parameter, and, in particular, in finding estimators which are simultaneously generalized Bayes and minimax over large classes of spherically symmetric distributions. Some attention is also given to the problem of estimating a location vector restricted to lie in a polyhedral cone.
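
A minimal Monte Carlo sketch of the kind of distributional robustness discussed here (the setup and constants below are illustrative, not taken from the paper): a James-Stein-type estimator that shrinks using a residual vector, evaluated under a non-normal spherically symmetric (multivariate t) error distribution, with a = (p − 2)/(k + 2) the classical normal-theory shrinkage constant.

    import numpy as np

    # Compare the risk of X with that of the residual-based shrinkage estimator
    # (1 - a * ||U||^2 / ||X||^2) X under multivariate-t (spherically symmetric) errors.
    rng = np.random.default_rng(0)
    p, k, nu = 8, 5, 5            # location dimension, residual dimension, t degrees of freedom
    theta = np.full(p, 1.0)       # arbitrary true location vector
    a = (p - 2) / (k + 2)         # classical normal-theory shrinkage constant

    reps = 200_000
    Z = rng.standard_normal((reps, p + k))
    chi = np.sqrt(rng.chisquare(nu, size=(reps, 1)) / nu)
    W = Z / chi                   # spherically symmetric errors (multivariate t)
    X, U = theta + W[:, :p], W[:, p:]

    shrink = 1 - a * np.sum(U**2, axis=1) / np.sum(X**2, axis=1)
    delta = shrink[:, None] * X

    print("risk of X:        ", np.mean(np.sum((X - theta) ** 2, axis=1)))
    print("risk of shrinkage:", np.mean(np.sum((delta - theta) ** 2, axis=1)))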

11 citations


Journal ArticleDOI
TL;DR: In this paper, a special issue on minimax shrinkage estimation is devoted to developments that ultimately arose from Stein's investigations into improving on the UMVUE of a multivariate normal mean vector.
Abstract: In 1956, Charles Stein published an article that was to forever change the statistical approach to high-dimensional estimation. His stunning discovery that the usual estimator of the normal mean vector could be dominated in dimensions 3 and higher amazed many at the time, and became the catalyst for a vast and rich literature of substantial importance to statistical theory and practice. As a tribute to Charles Stein, this special issue on minimax shrinkage estimation is devoted to developments that ultimately arose from Stein’s investigations into improving on the UMVUE of a multivariate normal mean vector. Of course, much of the early literature on the subject was due to Stein himself, including a key technical lemma commonly referred to as Stein’s Lemma, which leads to an unbiased estimator of the risk of an almost arbitrary estimator of the mean vector. The following ten papers assembled in this volume represent some of the many areas into which shrinkage has expanded (a one-dimensional pun, no doubt). Clearly, the shrinkage literature has branched out substantially since 1956, the many contributors and the breadth of theory and practice being now far too large to cover with any degree of completeness in a review issue such as this one. But what these papers do show is the lasting impact of Stein (1956), and the ongoing vitality of the huge area that he catalyzed.
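
For reference, Stein's Lemma in its simplest normal form, together with the unbiased risk estimate alluded to above: for X ~ N_p(θ, I_p) and a weakly differentiable g with the needed finite expectations,

\[
E\bigl[(X-\theta)^{\mathsf T} g(X)\bigr]=E\bigl[\nabla\!\cdot g(X)\bigr],
\qquad\text{so}\qquad
E\,\|X+g(X)-\theta\|^{2}=p+E\bigl[2\,\nabla\!\cdot g(X)+\|g(X)\|^{2}\bigr].
\]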

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study minimaxity of the generalized Bayes estimator under quadratic loss when the prior density is spherically symmetric and superharmonic.
Abstract: We consider Bayesian estimation of the location parameter θ of a random vector X having a unimodal spherically symmetric density f(‖x−θ‖²) when the prior density π(‖θ‖²) is spherically symmetric and superharmonic. We study minimaxity of the generalized Bayes estimator δπ(X) = X + ∇M(X)/m(X) under quadratic loss, where m is the marginal associated with f(‖x−θ‖²) and M is the marginal with respect to F(‖x−θ‖²) = ½ ∫_{‖x−θ‖²}^{∞} f(t) dt, under the condition inf_{t≥0} F(t)/f(t) = c > 0 (see Berger [1]). We adopt a common approach to the cases where F(t)/f(t) is nonincreasing or nondecreasing and, although details differ in the two settings, this paper complements the article by Fourdrinier and Strawderman [7], who dealt only with the case where F(t)/f(t) is nondecreasing. When F(t)/f(t) is nonincreasing, we show that the Bayes estimator is minimax provided a ‖∇π(‖θ‖²)‖²/π(‖θ‖²) + 2c² Δπ(‖θ‖²) ≤ 0, where a is a constant depending on the sampling density. When F(t)/f(t) is nondecreasing, the first term of that inequality is replaced by b g(‖θ‖²), where b also depends on f and where g(‖θ‖²) is a superharmonic upper bound of ‖∇π(‖θ‖²)‖²/π(‖θ‖²). Examples illustrate the theory.
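
Written out for readability (this is the standard meaning of the two marginals in this setting, stated here as an aid rather than quoted from the paper), the quantities entering δπ(X) = X + ∇M(X)/m(X) are

\[
m(x)=\int_{\mathbb{R}^{p}} f\bigl(\|x-\theta\|^{2}\bigr)\,\pi\bigl(\|\theta\|^{2}\bigr)\,d\theta,
\qquad
M(x)=\int_{\mathbb{R}^{p}} F\bigl(\|x-\theta\|^{2}\bigr)\,\pi\bigl(\|\theta\|^{2}\bigr)\,d\theta .
\]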

6 citations


Posted Content
TL;DR: In this paper, general and unified conditions for the minimaxity of the best equivariant predictive density estimator are derived and applied to various restricted parameter spaces in location and/or scale families.
Abstract: This paper is concerned with estimation of a predictive density with parametric constraints under Kullback-Leibler loss. When an invariance structure is embedded in the problem, general and unified conditions for the minimaxity of the best equivariant predictive density estimator are derived. These conditions are applied to check minimaxity in various restricted parameter spaces in location and/or scale families. Further, it is shown that the generalized Bayes estimator against the uniform prior over the restricted space is minimax and dominates the best equivariant estimator in a location family when the parameter is restricted to an interval of the form [a_0, ∞). Similar findings are obtained for scale parameter families. Finally, the presentation is accompanied by various observations and illustrations, such as normal, exponential location, and gamma model examples.
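
The Kullback-Leibler loss and risk for a predictive density estimator \(\hat q\), as usually defined in this literature (stated here for completeness), are

\[
L(\theta,\hat q)=\int p(y\mid\theta)\,\log\frac{p(y\mid\theta)}{\hat q(y)}\,dy,
\qquad
R(\theta,\hat q)=E_{\theta}\Bigl[L\bigl(\theta,\hat q(\cdot\,;X)\bigr)\Bigr].
\]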

6 citations


Posted Content
TL;DR: In this article, the authors provide a class of Bayesian confidence intervals with credibility 1 − α and frequentist coverage probability bounded below by (1 − α)/(1 + α).
Abstract: For estimating a lower bounded parametric function in the framework of Marchand and Strawderman (2006), we provide through a unified approach a class of Bayesian confidence intervals with credibility $1-\alpha$ and frequentist coverage probability bounded below by $\frac{1-\alpha}{1+\alpha}$. In cases where the underlying pivotal distribution is symmetric, the findings represent extensions with respect to the specification of the credible set achieved through the choice of a "spending function", and include Marchand and Strawderman's HPD procedure result. For non-symmetric cases, the determination of such a class of Bayesian credible sets fills a gap in the literature and includes an "equal-tails" modification of the HPD procedure. Several examples are presented demonstrating wide applicability.
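
A minimal Monte Carlo sketch of the phenomenon, assuming the simplest instance of this framework (X ~ N(θ, 1) with θ ≥ 0 and a flat prior on [0, ∞)) and using an equal-tails credible interval from the truncated-normal posterior; the paper's own procedures are defined via a spending function and are not reproduced here.

    import numpy as np
    from scipy import stats

    alpha = 0.05
    bound = (1 - alpha) / (1 + alpha)          # theoretical lower bound on coverage

    rng = np.random.default_rng(0)
    for theta in [0.0, 0.25, 0.5, 1.0, 2.0, 4.0]:
        x = rng.normal(theta, 1.0, size=50_000)
        # Posterior of theta given x under a flat prior on [0, inf) is N(x, 1) truncated to [0, inf).
        lo = stats.truncnorm.ppf(alpha / 2, a=-x, b=np.inf, loc=x, scale=1.0)
        hi = stats.truncnorm.ppf(1 - alpha / 2, a=-x, b=np.inf, loc=x, scale=1.0)
        coverage = np.mean((lo <= theta) & (theta <= hi))
        print(f"theta={theta:4.2f}  estimated coverage={coverage:.3f}  bound={bound:.3f}")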

5 citations


Journal ArticleDOI
TL;DR: In this article, a matrix version of the James-Stein estimator is proposed, depending on a tuning constant a. This result also extends to other shrinkage estimators and settings.
Abstract: Consider estimating an n×p matrix of means Θ, say, from an n×p matrix of observations X, where the elements of X are assumed to be independently normally distributed with E(x_ij) = θ_ij and constant variance, and where the performance of an estimator is judged using a p×p matrix quadratic error loss function. A matrix version of the James-Stein estimator is proposed, depending on a tuning constant a. It is shown to dominate the usual maximum likelihood estimator for some choices of a when n ≥ 3. This result also extends to other shrinkage estimators and settings.

5 citations


Posted Content
TL;DR: For normal canonical models, and more generally a vast array of general spherically symmetric location-scale models with a residual vector, the authors consider estimating the (univariate) location parameter when it is lower bounded.
Abstract: For normal canonical models, and more generally a vast array of general spherically symmetric location-scale models with a residual vector, we consider estimating the (univariate) location parameter when it is lower bounded. We provide conditions for estimators to dominate the benchmark minimax MRE estimator, and thus be minimax under scale invariant loss. These minimax estimators include the generalized Bayes estimator with respect to the truncation of the common non-informative prior onto the restricted parameter space for normal models under general convex symmetric loss, as well as non-normal models under scale invariant $L^p$ loss with $p>0$. We cover many other situations when the loss is asymmetric, and where other generalized Bayes estimators, obtained with different powers of the scale parameter in the prior measure, are proven to be minimax. We rely on various novel representations and sharp sign-change analyses, and capitalize on Kubokawa's integral-expression-of-risk-difference technique. Several other analytical properties are obtained, including a robustness property of the above generalized Bayes estimators when the loss is either scale invariant $L^p$ or an asymmetrized version. Applications include inference in a two-sample normal model with order constraints on the means.

4 citations


Posted Content
TL;DR: The authors study probit regression from a Bayesian perspective and give an alternative form for the posterior distribution when the prior on the regression parameters is uniform; this form allows simple Monte Carlo simulation of the posterior and may therefore be more efficient computationally than MCMC.
Abstract: We study probit regression from a Bayesian perspective and give an alternative form for the posterior distribution when the prior distribution for the regression parameters is the uniform distribution. This new form allows simple Monte Carlo simulation of the posterior, as opposed to the MCMC simulation studied in much of the literature, and may therefore be more efficient computationally. We also provide alternative explicit expressions for the first and second moments. Additionally, we provide analogous results for Gaussian priors.
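
The paper's specific representation of the flat-prior posterior is not reproduced here; as a generic point of comparison only, the sketch below (all data and names hypothetical) approximates the same flat-prior probit posterior and its first two moments by plain self-normalized importance sampling rather than MCMC.

    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    # Hypothetical data: n observations, p covariates, probit model y ~ Bernoulli(Phi(X @ beta)).
    rng = np.random.default_rng(1)
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    y = rng.binomial(1, stats.norm.cdf(X @ np.array([0.5, -1.0, 0.25])))

    def log_posterior(beta):
        # Log posterior under a flat (uniform) prior = probit log-likelihood.
        eta = X @ beta
        return np.sum(y * stats.norm.logcdf(eta) + (1 - y) * stats.norm.logcdf(-eta))

    # Gaussian proposal centered at the posterior mode, with inflated covariance.
    fit = minimize(lambda b: -log_posterior(b), np.zeros(p))
    proposal = stats.multivariate_normal(mean=fit.x, cov=2.0 * fit.hess_inv)
    draws = proposal.rvs(size=20_000, random_state=rng)

    log_w = np.array([log_posterior(b) for b in draws]) - proposal.logpdf(draws)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    post_mean = w @ draws                                   # first posterior moment
    centered = draws - post_mean
    post_cov = centered.T @ (centered * w[:, None])         # second (central) posterior moment
    print("posterior mean:", post_mean)
    print("posterior covariance:\n", post_cov)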
