
Showing papers by William E. Strawderman published in 2020



Journal ArticleDOI
TL;DR: In this paper, estimation of the mean vector under invariant quadratic loss is studied for a spherically symmetric location family with a residual vector with density of the form $f(x,u)=\eta^{(p+n)/2}f(\eta\{\|x-\theta\|^{2}+\|u\|^{2}\})$, where $\eta$ is unknown.
Abstract: This paper investigates estimation of the mean vector under invariant quadratic loss for a spherically symmetric location family with a residual vector with density of the form $f(x,u)=\eta ^{(p+n)/2}f(\eta \{\|x-\theta \|^{2}+\|u\|^{2}\})$, where $\eta $ is unknown. We show that the natural estimator $x$ is admissible for $p=1,2$. Also, for $p\geq 3$, we find classes of generalized Bayes estimators that are admissible within the class of equivariant estimators of the form $\{1-\xi (x/\|u\|)\}x$. In the Gaussian case, a variant of the James–Stein estimator, $[1-\{(p-2)/(n+2)\}/\{\|x\|^{2}/\|u\|^{2}+(p-2)/(n+2)+1\}]x$, which dominates the natural estimator $x$, is also admissible within this class. We also study the related regression model.
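For concreteness, here is a minimal NumPy sketch of the dominating estimator displayed in the abstract; the function name and the simulated inputs are illustrative only, not from the paper.

```python
import numpy as np

def js_variant(x, u):
    """Variant James-Stein estimator quoted in the abstract above:
    [1 - {(p-2)/(n+2)} / {||x||^2/||u||^2 + (p-2)/(n+2) + 1}] x.
    x is the observed p-vector (p >= 3); u is the residual n-vector."""
    p, n = x.size, u.size
    c = (p - 2) / (n + 2)
    ratio = (x @ x) / (u @ u)  # ||x||^2 / ||u||^2
    return (1.0 - c / (ratio + c + 1.0)) * x

# Illustrative use with simulated data:
rng = np.random.default_rng(0)
x = rng.standard_normal(5)    # p = 5 coordinates
u = rng.standard_normal(10)   # n = 10 residual components
print(js_variant(x, u))
```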

11 citations


Journal ArticleDOI
TL;DR: For increasing and concave $\rho$ and $\ell$ that also satisfy a complete monotonicity property, this work provides Baranchik-type estimators of $\theta$ that dominate the benchmark $\delta_0(X)=X$ when $X$ is distributed as a multivariate normal or as a scale mixture of normals.
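For orientation, Baranchik-type estimators classically take the form below; the conditions shown are the standard normal-case sufficient conditions of Baranchik (1970) and may differ from the weaker conditions used in the paper:

$$\delta_{r}(X)=\left(1-\frac{r(\|X\|^{2})}{\|X\|^{2}}\right)X,\qquad r(\cdot)\ \text{nondecreasing},\quad 0\le r(t)\le 2(p-2),$$

which guarantees domination of $\delta_0(X)=X$ when $X\sim N_p(\theta,I_p)$ and $p\ge 3$.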

9 citations


Journal ArticleDOI
TL;DR: In this paper, decision theoretic properties of Stein type shrinkage estimators were studied in simultaneous estimation of location parameters in a multivariate skew-normal distribution.
Abstract: This paper studies decision theoretic properties of Stein type shrinkage estimators in simultaneous estimation of location parameters in a multivariate skew-normal distribution with known s...
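For context, the univariate skew-normal density with shape parameter $\lambda$ (a standard definition, not quoted from the paper) is

$$f(x;\lambda)=2\,\phi(x)\,\Phi(\lambda x),\qquad x\in\mathbb{R},$$

where $\phi$ and $\Phi$ are the standard normal density and distribution function; $\lambda=0$ recovers the $N(0,1)$ density.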

7 citations


Book ChapterDOI
01 Jan 2020
TL;DR: As discussed in this chapter, the statistical literature acknowledges that spatial and temporal associations are captured most effectively using models that build dependencies in different stages or hierarchies, and that hierarchical models are especially advantageous with data sets that have several lurking sources of uncertainty and dependence.
Abstract: Proliferation of spatially indexed data (i.e., variable measurements are associated with a spatial location) has spurred considerable development in statistical modeling. Key texts in this field include Cressie (1993), Cressie and Wikle (2011), Chilès and Delfiner (1999), Møller and Waagepetersen (2003), Schabenberger and Gotway (2004), Wackernagel (2003), Diggle and Ribeiro (2007), and Banerjee et al. (2014). The statistical literature acknowledges that spatial (and temporal) associations are captured most effectively using models that build dependencies in different stages or hierarchies. As illustrated in Sect. 6.2, hierarchical models are especially advantageous with data sets that have several lurking sources of uncertainty and dependence.
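As a concrete illustration of the staged dependence the chapter describes, a canonical geostatistical hierarchical model follows the pattern below (our notation, in the spirit of Banerjee et al. 2014, not quoted from the chapter):

$$\begin{aligned}
\text{data stage:}&\quad y(s_i)=x(s_i)^{\top}\beta+w(s_i)+\epsilon(s_i),\qquad \epsilon(s_i)\stackrel{\text{iid}}{\sim}N(0,\tau^{2}),\\
\text{process stage:}&\quad w(\cdot)\sim \operatorname{GP}\bigl(0,\;\sigma^{2}\rho(\cdot,\cdot;\phi)\bigr),\\
\text{parameter stage:}&\quad \pi(\beta,\tau^{2},\sigma^{2},\phi)\ \text{a prior on all unknowns.}
\end{aligned}$$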

2 citations


Posted Content
TL;DR: In this paper, the admissibility of a subclass of generalized Bayes estimators of a multivariate normal vector when the variance is unknown, under scaled quadratic loss, was studied.
Abstract: We study admissibility of a subclass of generalized Bayes estimators of a multivariate normal vector when the variance is unknown, under scaled quadratic loss. Minimaxity is also established for certain of these estimators.
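For reference, with $X\sim N_p(\theta,\sigma^{2}I_p)$ and $\sigma^{2}$ unknown, "scaled quadratic loss" is commonly taken to be the invariant loss below (a standard convention; the paper's exact normalization may differ):

$$L\bigl((\theta,\sigma^{2}),\delta\bigr)=\frac{\|\delta-\theta\|^{2}}{\sigma^{2}}.$$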

2 citations


Posted Content
TL;DR: In this article, the authors investigate Bayes estimation of a normal mean matrix under the matrix quadratic loss and show that the Efron-Morris estimator is minimax.
Abstract: We investigate Bayes estimation of a normal mean matrix under the matrix quadratic loss, which is viewed as a class of loss functions including the Frobenius loss and the quadratic loss for each column. First, we derive an unbiased estimate of risk and show that the Efron–Morris estimator is minimax. Next, we introduce a notion of matrix superharmonicity for matrix-variate functions and show that it has properties analogous to those of usual superharmonic functions, which may be of independent interest. Then, we show that the generalized Bayes estimator with respect to a matrix superharmonic prior is minimax. We also provide a class of matrix superharmonic priors that includes the previously proposed generalization of Stein's prior. Numerical results demonstrate that matrix superharmonic priors work well for low rank matrices.
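A minimal NumPy sketch of one standard form of the Efron–Morris estimator for an $n\times p$ observation matrix with identity covariance; conventions (including the requirement $n-p-1>0$) vary across papers, so treat this as illustrative rather than the paper's exact definition.

```python
import numpy as np

def efron_morris(X):
    """Efron-Morris estimator X @ (I - (n - p - 1) (X'X)^{-1}),
    one standard form for an n x p matrix X with n - p - 1 > 0.
    It shrinks the singular values of X toward zero."""
    n, p = X.shape
    if n - p - 1 <= 0:
        raise ValueError("requires n > p + 1")
    G = X.T @ X                    # p x p Gram matrix, invertible a.s.
    return X @ (np.eye(p) - (n - p - 1) * np.linalg.inv(G))

# Illustrative use: a 20 x 3 observation matrix around a rank-1 mean
rng = np.random.default_rng(1)
Theta = np.outer(np.linspace(0.0, 2.0, 20), np.array([1.0, 2.0, 3.0]))
X = Theta + rng.standard_normal((20, 3))
print(efron_morris(X))
```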

2 citations


Book ChapterDOI
01 Jan 2020
TL;DR: In this paper, the authors discuss several approaches to specifying priors, introducing the concepts of noninformative and improper priors and defining conjugate priors.
Abstract: Selecting a prior distribution is integral to Bayesian analyses. In this chapter, we discuss several approaches to specifying priors. First, we discuss the concept of “noninformative” priors. Next we introduce improper priors. Following this, we define conjugate priors. We conclude with a brief discussion of how a scientist might specify an informative prior.
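As a textbook illustration of conjugacy (a standard example, not taken from the chapter): a Beta prior is conjugate to a binomial likelihood, so the posterior stays in the Beta family:

$$y\mid\theta\sim\operatorname{Binomial}(n,\theta),\qquad \theta\sim\operatorname{Beta}(a,b)\ \Longrightarrow\ \theta\mid y\sim\operatorname{Beta}(a+y,\;b+n-y).$$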

1 citation


Journal ArticleDOI
TL;DR: In this article, it was shown that the distribution of $Y$ is not stochastically ordered in $a>0$, and extensions to other quadratic forms and to spherically symmetric and skew-normal distributions were provided.

1 citation


Journal ArticleDOI
01 Jun 2020
TL;DR: In this paper, Chang and Strawderman consider Pitman closeness domination in predictive density estimation problems when the underlying loss metric is $\alpha$-divergence, a loss introduced by Csiszár (Stud Sci Math Hung 2:299–318, 1967).
Abstract: We consider Pitman closeness domination in predictive density estimation problems when the underlying loss metric is $\alpha$-divergence, $\{D(\alpha)\}$, a loss introduced by Csiszár (Stud Sci Math Hung 2:299–318, 1967). The underlying distributions considered are normal location-scale models, including the distribution of the observables, the distribution of the variable whose density is to be predicted, and the estimated predictive density, which will be taken to be of the plug-in type. The scales may be known or unknown. Chang and Strawderman (J Multivar Anal 128:1–9, 2014) have derived a general expression for the $\alpha$-divergence loss in this setup, and have shown that it is a concave monotone function of quadratic loss, and also a function of the variances (predictand and plug-in). We demonstrate $\{D(\alpha)\}$ Pitman closeness domination of certain plug-in predictive densities over others for the entire class of metrics simultaneously when modified Pitman closeness domination holds in the related problem of estimating the mean. We also establish $\{D(\alpha)\}$ Pitman closeness results for certain generalized Bayesian (best invariant) predictive density estimators. Examples of $\{D(\alpha)\}$ Pitman closeness domination presented relate to the problem of estimating the predictive density of the variable with the larger mean. We also consider the case of two-ordered normal means with a known covariance matrix.
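For reference, one common parameterization of the $\alpha$-divergence family between a true density $p$ and an estimate $\hat p$ (the normalizing constant varies across papers, so this may differ from the convention used here):

$$D_{\alpha}(p,\hat p)=\int f_{\alpha}\!\left(\frac{\hat p(y)}{p(y)}\right)p(y)\,dy,\qquad
f_{\alpha}(z)=\begin{cases}\dfrac{4}{1-\alpha^{2}}\bigl(1-z^{(1+\alpha)/2}\bigr), & |\alpha|<1,\\ z\log z, & \alpha=1,\\ -\log z, & \alpha=-1,\end{cases}$$

with the Kullback–Leibler losses recovered at $\alpha=\pm 1$.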

1 citation


Book ChapterDOI
01 Jan 2020
TL;DR: In this article, the authors consider the situation in which a scientist would like to select one of the candidate models to use for inference, and they discuss hypothesis testing and model selection.
Abstract: During the course of a scientific investigation, it is common to consider more than one model to explain or predict the phenomenon of interest. In some cases, it might be wise to cull the candidate models to a manageable number, and then use model averaging methods. However, often a scientist needs or wants to select a single model. In this chapter we consider the situation in which a scientist would like to select one of the candidate models to use for inference. Model selection shares many concepts with hypothesis testing, and so we begin this chapter with a discussion of hypothesis testing.

Book ChapterDOI
01 Jan 2020
TL;DR: In this paper, the authors present a review of commonly used probability distributions for Bayesian inference with respect to the prior distribution and the sampling distribution for the observable data, showing that the most important choices in Bayesian inferring usually involve the choices of distributions to represent the state of knowledge before data is collected (prior distribution).
Abstract: Effective Bayesian inference requires familiarity with probability distributions. In fact, as will be seen in subsequent chapters, the most important choices in Bayesian inference usually involve the choices of distributions to represent the state of knowledge before data is collected (prior distribution) and the sampling distribution, also referred to as the data model, for the observable data. Hence in this chapter we will review some commonly used probability distributions. Readers already knowledgeable about probability distributions should skim this chapter to insure that they are comfortable with our notation and terminology.

Book ChapterDOI
01 Jan 2020
TL;DR: In this article, the authors start with some relatively simple Bayesian models and set the stage for the more sophisticated models covered in later chapters, such as linear regression and Poisson regression.
Abstract: Before considering more advanced models which might be used in lieu of standard non-Bayesian approaches such as linear regression or Poisson regression, we start with some relatively simple Bayesian models. These will set the stage for the more sophisticated models covered in later chapters.

Book ChapterDOI
01 Jan 2020
TL;DR: In this paper, the authors present a situation in which an observed dependent variable, which may or may not be related to observed covariates, is not amenable to analysis via the usual linear model methodology.
Abstract: Oftentimes, researchers are confronted with a situation in which an observed dependent variable, which may or may not be related to observed covariates, is not amenable to analysis via "usual" linear model methodology. For instance, the dependent variable may be a count of some phenomenon, e.g., the number of individuals per plot. Since counts are discrete, they clearly fail the usual normality assumption for dependent variables, which, among other things, specifies that the dependent variable is continuous. Or perhaps the dependent variable is a proportion, constrained to lie in the interval $(0,1)$, e.g., the proportion of habitable land in a given area. The latter also fails the normality assumption, which specifies that the dependent variable is defined on the interval $(-\infty,\infty)$.
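As a sketch of the model forms such chapters typically reach for (standard alternatives, not quoted from the text), counts and proportions swap the normal likelihood for a Poisson or Beta one with a link function:

$$y_i\sim\operatorname{Poisson}(\lambda_i),\qquad \log\lambda_i=x_i^{\top}\beta\qquad\text{(counts)};$$

$$y_i\sim\operatorname{Beta}\bigl(\mu_i\phi,\,(1-\mu_i)\phi\bigr),\qquad \operatorname{logit}(\mu_i)=x_i^{\top}\beta\qquad\text{(proportions)}.$$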