
Showing papers by "James O. Berger published in 1982"


Journal ArticleDOI
TL;DR: In this article, the problem of estimating a $p$-variate normal mean under arbitrary quadratic loss when $p \geq 3$ is considered, and a relatively simple minimax estimator is developed which allows the user to select the region in which significant improvement over the standard (maximum likelihood) estimator is to be achieved.
Abstract: The problem of estimating a $p$-variate normal mean under arbitrary quadratic loss when $p \geq 3$ is considered. Any estimator having uniformly smaller risk than the maximum likelihood estimator $\delta^0$ will have significantly smaller risk only in a fairly small region of the parameter space. A relatively simple minimax estimator is developed which allows the user to select the region in which significant improvement over $\delta^0$ is to be achieved. Since the desired region of improvement should probably be chosen to coincide with prior beliefs concerning the whereabouts of the normal mean, the estimator is also analyzed from a Bayesian viewpoint.
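The kind of estimator discussed here can be illustrated with a standard James–Stein-type shrinkage rule aimed at a user-chosen target point; this is a minimal sketch of the general idea (shrinkage concentrates risk improvement near the target), not the specific minimax estimator constructed in the paper:

```python
import numpy as np

def stein_shrinkage(x, target):
    """Positive-part James-Stein-type estimator shrinking toward `target`.

    A sketch only: the region of greatest risk improvement over the MLE
    (which is simply x itself) lies near `target`, mirroring the paper's
    theme of letting the user choose where improvement is concentrated.
    Assumes identity covariance and p >= 3.
    """
    x = np.asarray(x, dtype=float)
    target = np.asarray(target, dtype=float)
    p = x.size
    d = x - target
    norm2 = d @ d
    if norm2 == 0.0:
        return target.copy()  # already at the target: no shrinkage needed
    shrink = max(0.0, 1.0 - (p - 2) / norm2)  # positive-part variant
    return target + shrink * d
```

Far from the target the shrinkage factor approaches 1 and the estimator nearly coincides with the MLE; near the target it pulls the estimate strongly toward the prior guess.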

77 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that the Stein effect yields surprisingly large Bayesian gains even when the prior information is misspecified; this provides a robustness justification for employing the Stein effect even when combining a priori independent problems, and a class of minimax estimators that closely mimic the conjugate prior Bayes estimators is introduced to study the issue.
Abstract: In simultaneous estimation of normal means, it is shown that through use of the Stein effect surprisingly large gains of a Bayesian nature can be achieved, at little or no cost, if the prior information is misspecified. This provides a justification, in terms of robustness with respect to misspecification of the prior, for employing the Stein effect, even when combining a priori independent problems (i.e., problems in which no empirical Bayes effects are obtainable). To study this issue, a class of minimax estimators that closely mimic the conjugate prior Bayes estimators is introduced.
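For reference, the conjugate prior Bayes estimator that the minimax class is said to mimic is the standard posterior mean under a normal prior; a minimal sketch for unit sampling variance (the paper's estimators themselves are not reproduced here):

```python
import numpy as np

def conjugate_bayes(x, prior_mean, prior_var):
    """Posterior mean for a normal mean with N(prior_mean, prior_var) prior.

    Assumes unit sampling variance, so the posterior mean shrinks each
    observation toward the prior mean by the factor tau^2 / (tau^2 + 1),
    where tau^2 is the prior variance. Applied coordinatewise.
    """
    x = np.asarray(x, dtype=float)
    w = prior_var / (prior_var + 1.0)  # shrinkage weight in [0, 1)
    return prior_mean + w * (x - prior_mean)
```

A minimax estimator mimicking this rule behaves like the Bayes estimator where the prior is concentrated, while retaining frequentist risk guarantees when the prior is wrong.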

55 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a control problem which, in canonical form, is the problem of estimating the unknown mean $\mathbf{\theta}$ of an observation from a $p$-variate normal distribution with identity covariance matrix, under the loss $L(\mathbf{\theta}, \mathbf{\delta}) = (\mathbf{\theta}^t \mathbf{\delta} - 1)^2$.
Abstract: Let $\mathbf{X} = (X_1, \cdots, X_p)^t$ be an observation from a $p$-variate normal distribution with unknown mean $\mathbf{\theta} = (\theta_1, \cdots, \theta_p)^t$ and identity covariance matrix. We consider a control problem which, in canonical form, is the problem of estimating $\mathbf{\theta}$ under the loss $L(\mathbf{\theta, \delta}) = (\mathbf{\theta}^t \mathbf{\delta} - 1)^2$, where $\mathbf{\delta(x)} = (\delta_1(\mathbf{x}), \cdots, \delta_p(\mathbf{x}))^t$ is the estimate of $\mathbf{\theta}$ for a given $\mathbf{x}$. General theorems are given for establishing admissibility or inadmissibility of estimators in this problem. As an application, it is shown that estimators of the form $\mathbf{\delta(x)} = (|\mathbf{x}|^2 + c)^{-1}\mathbf{x} + |\mathbf{x}|^{-4}w(|\mathbf{x}|)\mathbf{x}$, where $w(|\mathbf{x}|)$ tends to zero as $|\mathbf{x}| \rightarrow \infty$, are inadmissible if $c > 5 - p$, but are admissible if $c \leq 5 - p$ and $\mathbf{\delta}$ is generalized Bayes for an appropriate prior measure. Also, an approximation to generalized Bayes estimators for large $|\mathbf{x}|$ is developed.
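The leading term of the estimators analyzed in the abstract, $\mathbf{\delta(x)} = (|\mathbf{x}|^2 + c)^{-1}\mathbf{x}$, is easy to compute directly; this sketch takes the correction term $|\mathbf{x}|^{-4} w(|\mathbf{x}|)\mathbf{x}$ to be zero for simplicity:

```python
import numpy as np

def control_estimator(x, c):
    """Estimator delta(x) = x / (|x|^2 + c) for the control problem.

    Sketch of the leading term only, with the w(|x|) correction set to zero.
    The abstract's admissibility boundary is c = 5 - p (admissible for
    c <= 5 - p when delta is generalized Bayes, inadmissible for c > 5 - p).
    """
    x = np.asarray(x, dtype=float)
    return x / (x @ x + c)
```

Note how the estimator deliberately underestimates in scale: $\mathbf{\theta}^t \mathbf{\delta(x)}$ targets 1, so $\mathbf{\delta}$ behaves roughly like $\mathbf{\theta}/|\mathbf{\theta}|^2$ for large $|\mathbf{x}|$.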

11 citations


Book ChapterDOI
01 Jan 1982
TL;DR: In this paper, a more elaborate expansion of the process than the Karhunen–Loève expansion is developed, one that allows use of all the prior information in selecting a minimax estimator; the desired minimax estimator is also derived.
Abstract: Publisher Summary A version of the Karhunen–Loève expansion of X(·) has been used to reduce the estimation problem to that of estimating a countable sequence of normal means {θi}. The prior information concerning θ(·) was also transformed into prior information about the θi, but in selecting a minimax estimator using the prior information, the covariances among the θi were ignored. This chapter discusses a more complicated expansion of the process, which allows use of all the prior information in selecting a minimax estimator. The desired minimax estimator is derived from this expansion.

9 citations