
Showing papers by "James O. Berger published in 1976"


Journal ArticleDOI
TL;DR: In this article, the problem of estimating the mean of a $p$-variate $(p \geqq 3)$ normal distribution is considered and a class of minimax estimators is given.
Abstract: The problem of estimating the mean of a $p$-variate $(p \geqq 3)$ normal distribution is considered. It is assumed that the covariance matrix $\Sigma$ is known and that the loss function is quadratic. A class of minimax estimators is given, out of which admissible minimax estimators are developed.
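The abstract does not reproduce the estimators themselves. The classical James–Stein estimator is the best-known minimax shrinkage estimator of this kind; a minimal sketch, assuming the special case of identity covariance and unweighted squared error loss:

```python
import numpy as np

def james_stein(x):
    """James-Stein estimator of a p-variate normal mean with identity
    covariance under squared error loss. It dominates the minimax
    estimator delta(x) = x, hence is itself minimax, for p >= 3."""
    p = x.shape[0]
    if p < 3:
        raise ValueError("this shrinkage form requires p >= 3")
    return (1.0 - (p - 2) / np.sum(x ** 2)) * x

print(james_stein(np.array([1.5, -0.3, 2.1, 0.7])))
```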

160 citations


Journal ArticleDOI
TL;DR: In this paper, a broad class of minimax estimators for $\theta$ under the quadratic loss $(\delta - \theta)^t Q(\delta - \theta)$, where $Q$ is a known positive definite matrix, is developed.
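The quadratic loss in the TL;DR is straightforward to evaluate directly; a minimal sketch, with a hypothetical positive definite weight matrix $Q$:

```python
import numpy as np

def quadratic_loss(delta, theta, Q):
    """General quadratic loss (delta - theta)^t Q (delta - theta)."""
    d = delta - theta
    return float(d @ Q @ d)

Q = np.array([[2.0, 0.5, 0.0],   # a hypothetical symmetric,
              [0.5, 1.0, 0.0],   # positive definite weight matrix
              [0.0, 0.0, 3.0]])
print(quadratic_loss(np.array([1.0, 0.0, 2.0]), np.zeros(3), Q))
```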

55 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that under certain conditions on $r(X, W)$, the estimator given componentwise by $\delta_i(X, W) = (1 - r(X, W)/\lbrack\|X\|_W^2 q_i W_i\rbrack)X_i$ is a minimax estimator of the mean $\theta$ when the covariance matrix is an unknown positive definite diagonal matrix.
Abstract: Let $X = (X_1, \cdots, X_p)^t$ be a $p$-variate normal random vector with unknown mean $\theta = (\theta_1, \cdots, \theta_p)^t$ and unknown positive definite diagonal covariance matrix $A$. Assume that estimates $V_i$ of the variances $A_i$ are available, and that $V_i/A_i$ is $\chi^2_{n_i}$. Assume also that all $X_i$ and $V_i$ are independent. It is desired to estimate $\theta$ under the quadratic loss $\lbrack\sum^p_{i=1} q_i(\delta_i - \theta_i)^2\rbrack/\lbrack\sum^p_{i=1} q_i A_i\rbrack, \quad\text{where } q_i > 0, i = 1, \cdots, p.$ Defining $W_i = V_i/(n_i - 2)$, $W = (W_1, \cdots, W_p)^t$, and $\|X\|_W^2 = \sum^p_{j=1} \lbrack X_j^2/(q_j W_j^2)\rbrack$, it is shown that under certain conditions on $r(X, W)$, the estimator given componentwise by $\delta_i(X, W) = (1 - r(X, W)/\lbrack\|X\|_W^2 q_i W_i\rbrack)X_i$ is a minimax estimator of $\theta$. (The conditions on $r$ require $p \geqq 3$.) A good practical version of this estimator is also given.
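The estimator in the abstract is explicit once a shrinkage function $r$ is chosen. A minimal sketch that computes $\delta(X, W)$ componentwise; the paper's conditions on $r$ are not reproduced here, and the constant choice of $r$ below is purely illustrative:

```python
import numpy as np

def shrinkage_estimator(x, v, n, q, r):
    """Componentwise estimator from the abstract:
    delta_i = (1 - r(x, w) / (||x||_w^2 * q_i * w_i)) * x_i,
    with w_i = v_i / (n_i - 2) and
    ||x||_w^2 = sum_j x_j^2 / (q_j * w_j^2)."""
    w = v / (n - 2)
    norm_w2 = np.sum(x ** 2 / (q * w ** 2))
    return (1.0 - r(x, w) / (norm_w2 * q * w)) * x

# Hypothetical inputs: p = 4 coordinates, chi-square-based variance
# estimates v with degrees of freedom n, loss weights q, and an
# illustrative constant r (the paper's conditions require p >= 3).
x = np.array([2.0, -1.0, 0.5, 3.0])
v = np.array([4.0, 5.0, 3.0, 6.0])
n = np.array([10.0, 10.0, 10.0, 10.0])
q = np.ones(4)
print(shrinkage_estimator(x, v, n, q, lambda x, w: 2.0 * (x.size - 2)))
```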

38 citations


Journal ArticleDOI
TL;DR: In this paper, sufficient conditions for an estimator to be tail minimax are given, and a class of good tail minimax estimators is developed and compared with the best invariant estimator.
Abstract: Let $X = (X_1,\cdots, X_p)^t, p \geqq 3$, have density $f(x - \theta)$ with respect to Lebesgue measure. It is desired to estimate $\theta = (\theta_1,\cdots, \theta_p)^t$ under the loss $L(\delta - \theta)$. Assuming the problem has a minimax risk $R_0$, an estimator is defined to be tail minimax if its risk is no larger than $R_0$ outside some compact set. Under quite general conditions on $f$ and $L$, sufficient conditions for an estimator to be tail minimax are given. A class of good tail minimax estimators is then developed and compared with the best invariant estimator.
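In symbols, the defining property in the abstract can be written as follows (the compact set $C$ is not specified by the abstract):

```latex
% delta is tail minimax if its risk is bounded by the minimax risk
% R_0 outside some compact set C:
R(\theta, \delta) = E_\theta L(\delta(X) - \theta) \leqq R_0
  \quad \text{for all } \theta \notin C.
```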

29 citations


Journal ArticleDOI
TL;DR: In this paper, an asymptotic approximation to the generalized Bayes estimator $\delta_F$ is found, and it is used to show that if the density $f$ has enough moments, the loss $L$ and prior $F$ are smooth enough, and $F$ grows slowly enough, then $\delta_F$ is admissible for estimating $\theta_1$.
Abstract: Let $X$ be an $n$-dimensional random vector with density $f(x - \theta)$. It is desired to estimate $\theta_1$, under a strictly convex loss $L(\delta - \theta_1)$. If $F$ is a generalized Bayes prior density, the admissibility of the corresponding generalized Bayes estimator, $\delta_F$, is considered. An asymptotic approximation to $\delta_F$ is found. Using this approximation, it is shown that if (i) $f$ has enough moments, (ii) $L$ and $F$ are smooth enough, and (iii) $F(\theta) \leqq K(|\theta_1| + \sum^n_{i=2} \theta_i^2)^{(3-n)/2}$, then $\delta_F$ is admissible for estimating $\theta_1$. For example, assume that $F(\theta) \equiv 1$ and that $L$ is squared error loss. Under appropriate conditions it can be shown that $\delta_F(x) = x_1$, and that $\delta_F$ is the best invariant estimator. If, in addition, $f$ has 7 absolute moments and $n \leqq 3$, it can be concluded that $\delta_F$ is admissible.
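For the squared error example cited in the abstract, the generalized Bayes estimator has the familiar posterior-mean form (a standard identity, not spelled out above):

```latex
% Generalized Bayes estimator of theta_1 under squared error loss,
% for a possibly improper prior density F:
\delta_F(x) = \frac{\int \theta_1 f(x - \theta) F(\theta)\, d\theta}
                   {\int f(x - \theta) F(\theta)\, d\theta}
% With F \equiv 1 and f suitably centered, this reduces to
% \delta_F(x) = x_1, the best invariant estimator.
```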

22 citations



Journal ArticleDOI
TL;DR: In this paper, it was shown that under certain conditions on $f$ and $L$, the best invariant estimator of $(\theta_1, \theta_2)$ is inadmissible.
Abstract: Let $X = (X_1, X_2, X_3)$ be a random vector with density $f(x - \theta)$, where $\theta = (\theta_1, \theta_2, \theta_3)$ is unknown. It is desired to estimate $(\theta_1, \theta_2)$ using an estimator $(\delta_1(X), \delta_2(X))$, under a loss function $L(\delta_1 - \theta_1, \delta_2 - \theta_2)$. (Note that $\theta_3$ is a nuisance parameter.) Under certain conditions on $f$ and $L$, it is shown that the best invariant estimator of $(\theta_1, \theta_2)$ is inadmissible.
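The form of the best invariant estimator here is standard, though not spelled out in the abstract: since the translation group acts transitively on $\mathbb{R}^3$, equivariant estimators can differ from $(X_1, X_2)$ only by constants:

```latex
% Equivariance delta(x + a) = delta(x) + (a_1, a_2) for all a in R^3
% forces (take a = -x):
\delta(x) = (x_1 + c_1,\; x_2 + c_2),
% and the best invariant estimator chooses (c_1, c_2) to minimize the
% constant risk E_0 L(X_1 + c_1, X_2 + c_2).
```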

3 citations