Open Access · Journal Article · DOI

The Rate of Convergence of Consistent Point Estimators

James C. Fu
01 Jan 1975 · Vol. 3, Iss. 1, pp. 234–240
TLDR
In this paper, an upper bound for the rate of convergence of consistent estimators based on sample quantiles is derived; the sample median is shown to be asymptotically efficient in Bahadur's sense if and only if the underlying distribution is double-exponential, and the Bahadur asymptotic relative efficiency of the sample mean and sample median is shown to coincide with the classical Pitman asymptotic relative efficiency.
Abstract
The rate at which the probability $P_\theta\{|t_n - \theta| \geqq \varepsilon\}$ of a consistent estimator $t_n$ tends to zero is of great importance in the large sample theory of point estimation. The main tools available at present for finding this rate are the Bernstein-Chernoff-Bahadur theorem and Sanov's theorem. In this paper, we give two new techniques for finding the rate of convergence of certain consistent estimators. By using these techniques, we obtain an upper bound for the rate of convergence of consistent estimators based on sample quantiles and prove that the sample median is an asymptotically efficient estimator in Bahadur's sense if and only if the underlying distribution is double-exponential. Furthermore, we prove that the Bahadur asymptotic relative efficiency of the sample mean and sample median coincides with the classical Pitman asymptotic relative efficiency.
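The abstract's central quantity, $P_\theta\{|t_n - \theta| \geqq \varepsilon\}$, can be explored numerically. Below is a minimal Monte Carlo sketch (the function name `tail_prob`, the tolerance $\varepsilon = 0.5$, and the sample sizes are illustrative choices, not taken from the paper) comparing the sample mean and sample median under the double-exponential (Laplace) distribution, where the paper shows the median is Bahadur-efficient:

```python
import numpy as np

def tail_prob(estimator, n, eps=0.5, reps=20000, seed=0):
    """Monte Carlo estimate of P(|t_n - theta| >= eps) with theta = 0
    under a standard double-exponential (Laplace) distribution."""
    rng = np.random.default_rng(seed)
    samples = rng.laplace(loc=0.0, scale=1.0, size=(reps, n))
    t_n = estimator(samples, axis=1)
    return float(np.mean(np.abs(t_n) >= eps))

# The tail probability of the median shrinks faster with n than that of
# the mean, consistent with the median's efficiency in Bahadur's sense
# for the double-exponential law.
for n in (20, 40, 80):
    print(n, tail_prob(np.mean, n), tail_prob(np.median, n))
```

This only illustrates the exponential decay the paper studies analytically; the large-deviation rates themselves are derived in the article, not estimated here.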


Citations
Journal Article · DOI

Tail behavior of regression estimators and their breakdown points

TL;DR: In this article, a finite sample measure of performance of regression estimators based on tail behavior is introduced, which is essentially the same as the finite sample concept of breakdown point introduced by Donoho and Huber (1983).
Journal Article · DOI

On moderate deviation theory in estimation

TL;DR: In this article, the performance of a sequence of estimators is investigated over neighborhoods of the parameter of interest. Classical results concerning local or non-local efficiency gain strength when extended to larger neighborhoods; in this way one can investigate where optimality passes into non-optimality, for instance when an estimator is locally efficient but non-locally inefficient.
Journal Article · DOI

Large Sample Point Estimation: A Large Deviation Theory Approach

James C. Fu
01 Sep 1982
TL;DR: In this paper, the exponential rates of decrease and bounds on tail probabilities for consistent estimators were studied using large deviation methods, and the asymptotic expansions of Bahadur bounds and exponential rates in the case of the maximum likelihood estimator were obtained.
Journal Article · DOI

Large deviations for M-estimators

TL;DR: In this article, the authors study the large deviation principle (LDP) for M-estimators (and maximum likelihood estimators in particular) and obtain the rate function of the LDP for M-estimators.
Journal Article · DOI

Estimates of Location: A Large Deviation Comparison

TL;DR: In this paper, the authors consider the estimation of a location parameter in a one-sample problem and show that the asymptotic performance of a sequence of estimates can be measured by the exponential rate at which the probability of an error of at least $\varepsilon$ converges to 0.