
Showing papers by "William E. Strawderman published in 2018"


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, it was shown that the usual estimator of a location vector could be improved upon quite generally for p ≥ 3 and Brown (1966) substantially extended this conclusion to essentially arbitrary loss functions.
Abstract: In the previous chapters, estimation problems were considered for the normal distribution setting. Stein (1956) showed that the usual estimator of a location vector could be improved upon quite generally for p ≥ 3, and Brown (1966) substantially extended this conclusion to essentially arbitrary loss functions. Explicit results of the James-Stein type, however, have thus far been restricted to the case of the normal distribution. Recall the geometrical insight from Sect. 2.2.2: the development did not depend on the normality of X, or even on θ being a location vector. This suggests that the improvement for Stein-type estimators may hold for more general distributions. Strawderman (1974a) first explored such an extension and considered estimation of the location parameter for scale mixtures of multivariate normal distributions.

2 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this paper, the authors extended the discussion to the spherically symmetric distributions introduced in Chapter 4, and discussed domination results for Baranchik-type estimators and Bayes minimax estimation.
Abstract: In Chapters 2 and 3 we studied estimators that improve over the “usual” estimator of the location vector for the case of a normal distribution. In this chapter, we extend the discussion to the spherically symmetric distributions discussed in Chapter 4. Section 5.2 is devoted to a discussion of domination results for Baranchik-type estimators while Section 5.3 examines more general estimators. Section 5.4 discusses Bayes minimax estimation. Finally, Section 5.5 discusses estimation with a concave loss.
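A Baranchik-type estimator has the form δ_r(X) = (1 − r(∥X∥²)(p − 2)/∥X∥²)X; a minimal sketch, with the sufficient conditions from the normal case noted as comments (analogous conditions are what the chapter develops for spherically symmetric laws):

```python
import numpy as np

def baranchik(x, r):
    # delta_r(x) = (1 - r(||x||^2) * (p - 2) / ||x||^2) * x.
    # In the normal case, sufficient conditions for domination include
    # 0 <= r <= 2 with r nondecreasing; the spherically symmetric case
    # imposes analogous conditions on r.
    s = np.sum(x**2, axis=-1, keepdims=True)
    p = x.shape[-1]
    return (1.0 - r(s) * (p - 2) / s) * x

# r identically 1 recovers the James-Stein estimator.
x = np.array([[1.0, 2.0, 2.0]])  # ||x||^2 = 9, p = 3
js = baranchik(x, lambda s: np.ones_like(s))  # factor 1 - 1/9 = 8/9
```

Other choices of r, e.g. r(s) = min(s/(p − 2), 1), give positive-part-style rules within the same family.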

2 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, the authors give an overview of statistical and decision theoretic concepts and results that will be used throughout the book, often without proof, some results in Bayesian decision theory, minimaxity, admissibility, invariance, and general linear models.
Abstract: In this chapter we give an overview of statistical and decision theoretic concepts and results that will be used throughout the book. We assume that the reader is familiar with the basic statistical notions of parametric families of distributions, likelihood functions, maximum likelihood estimation, sufficiency, completeness and unbiasedness at the level of, for example, Casella and Berger (2001), Shao (2003), or Bickel and Doksum (2001). In the following, we will discuss, often without proof, some results in Bayesian decision theory, minimaxity, admissibility, invariance, and general linear models.

2 citations


Posted Content
TL;DR: In this paper, the authors considered the problem of improving on the minimum risk equivariant predictive density for spherically symmetric distributions with unknown location and scale parameters, and showed that the Bayes predictive density with respect to the harmonic prior dominates it simultaneously for all scale mixtures of normals, subject to finite moment and finite risk conditions.
Abstract: Let $X,U,Y$ have a spherically symmetric distribution with density $$\eta^{d+k/2} \, f\left(\eta(\|x-\theta\|^2+ \|u\|^2 + \|y-c\theta\|^2 ) \right)\,,$$ with unknown parameters $\theta \in \mathbb{R}^d$ and $\eta>0$, and with known density $f$ and constant $c >0$. Based on observing $X=x,U=u$, we consider the problem of obtaining a predictive density $\hat{q}(y;x,u)$ for $Y$ as measured by the expected Kullback-Leibler loss. A benchmark procedure is the minimum risk equivariant density $\hat{q}_{mre}$, which is generalized Bayes with respect to the prior $\pi(\theta, \eta) = \eta^{-1}$. For $d \geq 3$, we obtain improvements on $\hat{q}_{mre}$, and further show that the dominance holds simultaneously for all $f$ subject to finite moment and finite risk conditions. We also obtain that the Bayes predictive density with respect to the harmonic prior $\pi_h(\theta, \eta) =\eta^{-1} \|\theta\|^{2-d}$ dominates $\hat{q}_{mre}$ simultaneously for all scale mixtures of normals $f$.
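The Kullback-Leibler loss used to compare predictive densities can be approximated by Monte Carlo. The sketch below uses simple Gaussian stand-ins (not the paper's spherically symmetric setup) purely to show the loss computation: a correctly centered but overdispersed predictive density incurs a smaller KL loss than a miscentered one.

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_logpdf(y, mu, var):
    # log density of N(mu, var), written out to stay dependency-free
    return -0.5 * np.log(2 * np.pi * var) - (y - mu) ** 2 / (2 * var)

def kl_loss(log_p, log_q, y):
    # Monte Carlo estimate of E_Y[log p(Y) - log qhat(Y)] with Y ~ p
    return np.mean(log_p(y) - log_q(y))

y = rng.standard_normal(200_000)          # true density: N(0, 1)
log_p = lambda t: norm_logpdf(t, 0.0, 1.0)
loss_centered = kl_loss(log_p, lambda t: norm_logpdf(t, 0.0, 2.0), y)
loss_shifted = kl_loss(log_p, lambda t: norm_logpdf(t, 1.0, 2.0), y)
```

Dominance results like the one in the abstract assert this ordering of expected losses uniformly in the unknown parameters, not just at one parameter value.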

2 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, the authors present the frequentist risk as an assessment of the long-run performance of an estimator φ(X) of an unknown parameter θ, based on an observation X from a distribution Pθ.
Abstract: Suppose X is an observation from a distribution Pθ parameterized by an unknown parameter θ. In classical decision theory, after selecting an estimation procedure φ(X) of θ, it is typical to evaluate it through a criterion, i.e. a loss, L(θ, φ(X)), which represents the cost incurred by the estimator φ(X) when the unknown parameter equals θ. As it depends on the particular value of X, this loss alone is not appropriate for assessing the long-run performance of the estimator φ. Indeed, to be valid (in the frequentist sense), a global evaluation of such a statistical procedure should be based on all the possible observations. Consequently, it is common to report the risk R(θ, φ) = Eθ[L(θ, φ(X))] as a gauge of the efficiency of φ (Eθ denotes expectation with respect to Pθ). Thus, we have at our disposal a measure of the long-run performance of φ(X) for each value of θ. However, although this notion of risk can effectively be used in comparing φ(X) with other estimators, it is inaccessible since θ is unknown. A common and, in principle, accessible frequentist risk assessment is the maximum risk \(\bar {R}_\varphi = \sup _\theta R(\theta ,\varphi )\).
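The risk R(θ, φ) = Eθ[L(θ, φ(X))] is just an expectation, so it can be approximated by averaging the loss over simulated observations. A minimal sketch, assuming squared-error loss and X ~ N_p(θ, I_p) as an illustrative model:

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_risk(estimator, theta, loss, sampler, n_rep=50_000):
    # Monte Carlo approximation of R(theta, phi) = E_theta[L(theta, phi(X))]
    x = sampler(theta, n_rep)
    return np.mean(loss(theta, estimator(x)))

p = 5
sq_loss = lambda th, d: np.sum((d - th) ** 2, axis=-1)
sampler = lambda th, n: th + rng.standard_normal((n, p))  # X ~ N_p(theta, I_p)

# For phi(X) = X the risk is constant in theta (equal to p), so the
# maximum risk sup_theta R(theta, phi) is also p.
risk_at_zero = mc_risk(lambda x: x, np.zeros(p), sq_loss, sampler)
```

Because the risk of X is constant, evaluating it at a single θ already gives the maximum risk; for a non-equivariant estimator one would have to scan (or bound) the risk over θ.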

1 citation


Book ChapterDOI
01 Jan 2018
TL;DR: In this paper, the authors considered the canonical form of the general linear model introduced in Section 4.5 when a residual vector U is available, and showed that estimation of θ under quadratic loss ∥δ − θ∥² parallels the normal situation presented in Sections 2.3 and 2.4.
Abstract: In this chapter, we consider the canonical form of the general linear model introduced in Section 4.5 when a residual vector U is available. Recall that (X, U) is a random vector centered at (θ, 0) (such that dim X = dim θ = p and dim U = dim 0 = k) with a spherically symmetric distribution, that is, (X, U) ∼ SSp+k(θ, 0). Estimation of θ under quadratic loss ∥δ − θ∥² parallels the normal situation presented in Sections 2.3 and 2.4 where \(X\sim {\mathcal N}_p(\theta , \sigma ^2 I_p)\) (with σ² known) and the estimators of θ are of the form δ(X) = X + σ²g(X). In the case where σ² is unknown (see Section 2.4.3), the corresponding estimators are
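The two forms of estimator can be sketched side by side. In the unknown-variance sketch below, σ² is replaced by ∥U∥²/(k + 2); that particular normalizer is a standard choice in this setting but is an assumption here, since the exact constant depends on the loss and the result being invoked.

```python
import numpy as np

def delta_known_var(x, sigma2, g):
    # Known-variance form: delta(X) = X + sigma^2 * g(X)
    return x + sigma2 * g(x)

def delta_residual(x, u, g):
    # Unknown-variance analogue: sigma^2 is estimated from the residual
    # vector U via ||U||^2 / (k + 2) (an assumed, standard normalizer)
    k = u.shape[-1]
    s2 = np.sum(u ** 2, axis=-1, keepdims=True) / (k + 2)
    return x + s2 * g(x)

def g_js(x):
    # James-Stein-type shrinkage function g(x) = -(p - 2) x / ||x||^2
    p = x.shape[-1]
    return -(p - 2) * x / np.sum(x ** 2, axis=-1, keepdims=True)

x = np.array([[3.0, 4.0, 0.0]])   # p = 3, ||x||^2 = 25
u = np.zeros((1, 4))              # k = 4; zero residual as a degenerate check
```

With a zero residual vector the estimated variance is zero and `delta_residual` reduces to X, while `delta_known_var` with σ² = 1 shrinks X by the factor 1 − 1/25.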

1 citation


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, a Bayesian view of minimax shrinkage estimation is presented, and a general sufficient condition for minimaxity of Bayes and generalized Bayes estimators in the known variance case is derived.
Abstract: As we saw in Chap. 2, the frequentist paradigm is well suited for risk evaluations, but is less useful for estimator construction. It turns out that the Bayesian approach is complementary, as it is well suited for the construction of possibly optimal estimators. In this chapter we take a Bayesian view of minimax shrinkage estimation. In Sect. 3.1 we derive a general sufficient condition for minimaxity of Bayes and generalized Bayes estimators in the known variance case; we also illustrate the theory with numerous examples.

Posted Content
TL;DR: In this paper, the authors review minimax best equivariant estimation in these invariant estimation problems: a location parameter, a scale parameter and a (Wishart) covariance matrix.
Abstract: This paper reviews minimax best equivariant estimation in these invariant estimation problems: a location parameter, a scale parameter, and a (Wishart) covariance matrix. We briefly review development of the best equivariant estimator as a generalized Bayes estimator relative to right invariant Haar measure in each case. Then we prove minimaxity of the best equivariant procedure by giving a least favorable prior sequence based on non-truncated Gaussian distributions. The results in this paper are all known, but we bring a fresh and somewhat unified approach by using, in contrast to most proofs in the literature, a smooth sequence of non-truncated priors. This approach leads to some simplifications in the minimaxity proofs.

Book ChapterDOI
01 Jan 2018
TL;DR: In this article, the authors consider the problem of estimating a location vector which is constrained to lie in a convex subset of ℝ^p, and show that in the normal case the Bayes estimator of the mean with respect to the uniform prior over any convex set dominates X under the usual quadratic loss ∥δ − θ∥².
Abstract: In this chapter, we will consider the problem of estimating a location vector which is constrained to lie in a convex subset of \(\mathbb {R}^p\). Estimators that are constrained to a set should be contrasted with the shrinkage estimators discussed in Sect. 2.4.4, where one has “vague knowledge” that a location vector is in or near the specified set and consequently wishes to shrink toward the set, but does not wish to restrict the estimator to lie in the set. Much of the chapter is devoted to two types of constraint sets: balls and polyhedral cones. However, Sect. 7.2 is devoted to general convex constraint sets and more particularly to a striking result of Hartigan (2004) which shows that in the normal case, the Bayes estimator of the mean with respect to the uniform prior over any convex set, \(\mathcal {C}\), dominates X for all \(\theta \in \mathcal {C}\) under the usual quadratic loss ∥δ − θ∥².
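As a concrete example of a convex constraint set, the sketch below implements the Euclidean projection onto a ball, the simplest way to force an estimate into the set. Note this projection is not the uniform-prior Bayes estimator of Hartigan's result; it merely illustrates what "restricting the estimator to lie in the set" means computationally.

```python
import numpy as np

def project_ball(x, radius, center=0.0):
    # Euclidean projection onto the convex set
    # {theta : ||theta - center|| <= radius}:
    # points inside are unchanged, points outside are pulled radially
    # back to the boundary.
    d = np.asarray(x, dtype=float) - center
    norm = np.linalg.norm(d, axis=-1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norm, 1e-12))
    return center + d * scale
```

By contrast, the uniform-prior Bayes estimator averages the prior over the whole set rather than snapping to the nearest boundary point, which is what produces the domination over X.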