Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to a test that rejects when the sum of the observations falls below a threshold, and that each such test has an associated index $\rho$: for large samples, a sample of size $n$ with one test gives about the same probabilities of error as a sample of size $en$ with a second test, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
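The index $\rho$ above is the minimum of the moment generating function of $X - a$. As a hedged numerical illustration (the Bernoulli model, the parameter values, and the use of SciPy's optimizer below are assumptions made for the example, not part of the paper), one can compute $\rho$ and check that $P(S_n \leqq na)$ decays roughly like $\rho^n$:

```python
# Minimal numerical sketch of Chernoff's index rho = min_t E[exp(t (X - a))].
# Illustration only: X ~ Bernoulli(p) under the alternative with p = 0.6 and
# threshold a = 0.5; these values are hypothetical choices, not from the paper.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

p, a = 0.6, 0.5

def mgf_shifted(t):
    # E[exp(t (X - a))] for X ~ Bernoulli(p)
    return (1 - p) * np.exp(-t * a) + p * np.exp(t * (1 - a))

res = minimize_scalar(mgf_shifted)   # minimize over t
rho = res.fun
print(f"rho = {rho:.6f}")

# Sanity check: P(S_n <= n a) should behave roughly like rho^n for large n.
for n in (50, 200, 800):
    exact = binom.cdf(np.floor(n * a), n, p)
    print(n, exact, rho ** n, np.log(exact) / (n * np.log(rho)))

# Repeating the computation for a second test statistic and forming
# log(rho_1) / log(rho_2) gives the relative efficiency e described above.
```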
Citations
Proceedings ArticleDOI
05 Jan 1997
TL;DR: This paper presents a conceptually simple randomized Las Vegas approximation algorithm for LMS that runs in O(n log n) time, together with a practical randomized algorithm that offers an attractive option for practitioners, combining the efficiency of a Monte Carlo algorithm with guarantees on the accuracy of the result.
Abstract: The problem of fitting a straight line to a finite collection of points in the plane is an important problem in statistical estimation. Robust estimators are particularly important because of their lack of sensitivity to outlying data points. The basic measure of the robustness of an estimator is its breakdown point, that is, the fraction (up to 50%) of outlying data points that can corrupt the estimator. Rousseeuw's least median-of-squares (LMS) regression (line) estimator is among the best known 50% breakdown-point estimators. The best exact algorithms known for this problem run in O(n^2) time, where n is the number of data points. Because of this high running time, many practitioners prefer to use a simple O(n log n) Monte Carlo algorithm, which is quite efficient but provides no guarantees of accuracy (even probabilistic) unless the data set satisfies certain assumptions. In this paper, we present two algorithms in an attempt to close the gap between theory and practice. The first is a conceptually simple randomized Las Vegas approximation algorithm for LMS, which runs in O(n log n) time. However, this algorithm relies on somewhat complicated data structures to achieve its efficiency. The second is a practical randomized algorithm for LMS that uses only simple data structures. It can be run as either an exact or an approximation algorithm. This algorithm runs no slower than O(n^2 log n) time, but we present empirical evidence that its running time on realistic data sets is much better. This algorithm provides an attractive option for practitioners, combining both the efficiency of a Monte Carlo algorithm and guarantees on the accuracy of the result.
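The simple Monte Carlo heuristic that the abstract contrasts with the exact algorithms can be sketched roughly as follows (a hedged illustration only: the pair-sampling rule, the number of trials, and the toy data below are assumptions for the example, not the algorithms proposed in the paper):

```python
# Randomized LMS heuristic sketch: repeatedly fit a line through a random pair
# of points and keep the one minimizing the median of squared residuals.
import numpy as np

def lms_line_monte_carlo(x, y, trials=500, rng=None):
    rng = np.random.default_rng(rng)
    n = len(x)
    best = (np.inf, 0.0, 0.0)           # (median squared residual, slope, intercept)
    for _ in range(trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue                    # skip vertical pairs in this sketch
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        med = np.median((y - (slope * x + intercept)) ** 2)
        if med < best[0]:
            best = (med, slope, intercept)
    return best

# Toy data: a line with 20% gross outliers; LMS should largely ignore them.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)
y[:20] += 50.0
print(lms_line_monte_carlo(x, y, rng=1))
```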

50 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...Our analysis makes use of the Chernoff bounds on the tail of the binomial distribution (Chernoff, 1952). Proofs of these particular bounds have been presented by Motwani and Raghavan (1995) and Hagerup and Rüb (1990)....

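As a hedged illustration of the kind of bound referred to in the citation context above (the constant-3 form below is a standard textbook simplification of the Chernoff bound, not necessarily the exact form used by the citing paper):

```python
# Compare the exact binomial upper tail with the standard multiplicative
# Chernoff bound  P(X >= (1 + d) mu) <= exp(-mu d^2 / 3),  valid for 0 < d <= 1,
# where X ~ Binomial(n, p) and mu = n p. Parameter values are illustrative.
import numpy as np
from scipy.stats import binom

n, p, d = 1000, 0.3, 0.2
mu = n * p
exact = binom.sf(np.ceil((1 + d) * mu) - 1, n, p)   # P(X >= (1 + d) mu)
bound = np.exp(-mu * d * d / 3)
print(f"exact tail = {exact:.3e}, Chernoff bound = {bound:.3e}")
```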

Journal ArticleDOI
TL;DR: In this paper, the authors survey large deviation theory in a general setting, developing it from the principle of the largest term and Ruelle-Lanford functions through exponential tightness and Varadhan's theorems to convexity and the differentiability of the pressure.
Abstract: Contents
§1. Introduction
§2. The principle of the largest term: 2.1. The general setting; 2.2. The principle of the largest term; 2.3. Upper and lower deviation functions; 2.4. Concentration of measures on compact spaces
§3. Vague large deviation principles and Ruelle-Lanford functions: 3.1. Vague large deviation principles; 3.2. Ruelle-Lanford functions
§4. Examples
§5. Narrow large deviation principles and exponential tightness: 5.1. Narrow large deviation principles; 5.2. Exponential tightness; 5.3. Concentration of exponentially tight measures
§6. Large deviation principles and Varadhan's theorems: 6.1. Large deviation principles; 6.2. Varadhan's theorems
§7. Convexity: 7.1. The scaled generating function; 7.2. Weak law of large numbers and the differentiability of the pressure
Bibliography

50 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...We saw that entropy enters in Lanford's proof of Cramér's theorem; another proof, due to Chernoff [14], makes use of the pressure....

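For context, the "pressure" referred to here is the scaled cumulant generating function. In the notation of the main abstract above (standard large-deviation notation, not a quotation from either paper), Chernoff's exponential bound can be written as

$$P(S_n \leqq na) \;\leqq\; \inf_{t \leqq 0} e^{-tna}\,\bigl(E[e^{tX}]\bigr)^n \;=\; \exp\Bigl\{-n \sup_{t \leqq 0}\bigl[ta - \Lambda(t)\bigr]\Bigr\}, \qquad \Lambda(t) = \log E[e^{tX}],$$

so the index $\rho = \min_t E[e^{t(X-a)}]$ of the main abstract equals $e^{-\sup_t [ta - \Lambda(t)]}$.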

Dissertation
01 Jan 2005
TL;DR: This dissertation studies a broad class of stochastic scheduling problems characterized by the presence of hard deadline constraints, and develops approximation algorithms: algorithms that run in polynomial time and compute a policy whose expected value is provably close to that of an optimal adaptive policy.
Abstract: In this dissertation we study a broad class of stochastic scheduling problems characterized by the presence of hard deadline constraints. The input to such a problem is a set of jobs, each with an associated value, processing time, and deadline. We would like to schedule these jobs on a set of machines over time. In our stochastic setting, the processing time of each job is random, known in advance only as a probability distribution (and we make no assumptions about the structure of this distribution). Only after a job completes do we know its actual “instantiated” processing time with certainty. Each machine can process only a single job at a time, and each job must be assigned to only one machine for processing. After a job starts processing we require that it must be allowed to complete—it cannot be canceled or “preempted” (put on hold and resumed later). Our goal is to devise a scheduling policy that maximizes the expected value of jobs that are scheduled by their deadlines. A scheduling policy observes the state of our machines over time, and any time a machine becomes available for use, it selects a new job to execute on that machine. Scheduling policies can be classified as adaptive or non-adaptive based on whether or not they utilize information learned from the instantiation of processing times of previously-completed jobs in their future scheduling decisions. A novel aspect of our work lies in studying the benefit one can obtain through adaptivity, as we show that for all of our stochastic scheduling problems, adaptivity can only allow us to improve the expected value obtained by an optimal policy by at most a small constant factor. All of the problems we consider are at least NP-hard since they contain the deterministic 0/1 knapsack problem as a special case. We therefore seek to develop approximation algorithms: algorithms that run in polynomial time and compute a policy whose expected value is provably close to that of an optimal adaptive policy. For all the problems we consider, we can approximate the expected value obtained by an optimal adaptive policy to within a small constant factor (which depends on the problem under consideration, but is always less than 10). A small handful of our results are pseudo-approximation algorithms, delivering an approximately optimal policy that is feasible with respect to a slightly expanded set of deadlines. (Abstract shortened by UMI.)
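The non-adaptive policies discussed above fix a job ordering in advance and never react to instantiated processing times. A toy Monte Carlo sketch of evaluating such a policy is given below (entirely illustrative: the single-machine setting, the exponential processing times, the job data, and the greedy ordering rule are assumptions for the example, not the dissertation's algorithms):

```python
# Toy sketch: estimate the expected value of a fixed (non-adaptive) job
# ordering on one machine. Jobs have a value, a random processing time, and a
# deadline; a job counts only if it finishes by its deadline, and started jobs
# cannot be preempted. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
jobs = [
    # (value, mean processing time, deadline)
    (10.0, 3.0, 8.0),
    (6.0, 1.0, 4.0),
    (8.0, 4.0, 10.0),
    (3.0, 0.5, 2.0),
]

def expected_value(order, trials=20000):
    total = 0.0
    for _ in range(trials):
        t = 0.0
        for idx in order:
            value, mean_pt, deadline = jobs[idx]
            t += rng.exponential(mean_pt)     # instantiated processing time
            if t <= deadline:
                total += value
    return total / trials

# Non-adaptive greedy ordering by value per unit of expected processing time.
greedy = sorted(range(len(jobs)),
                key=lambda i: jobs[i][0] / jobs[i][1], reverse=True)
print(greedy, expected_value(greedy))
```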

50 citations

Proceedings ArticleDOI
10 Jun 2014
TL;DR: Simulation results show that the proposed CSI acquisition scheme with stochastic beamforming can significantly reduce the CSI overhead while providing performance close to that with full CSI.
Abstract: Cloud radio access network (Cloud-RAN) is a promising network architecture to meet the explosive growth of the mobile data traffic. In this architecture, as all the baseband signal processing is shifted to a single baseband unit (BBU) pool, interference management can be efficiently achieved through coordinated beamforming, which, however, often requires full channel state information (CSI). In practice, the overhead incurred to obtain full CSI will dominate the available radio resource. In this paper, we propose a unified framework for the CSI overhead reduction and downlink coordinated beamforming. Motivated by the channel heterogeneity phenomena in large-scale wireless networks, we first propose a novel CSI acquisition scheme, called compressive CSI acquisition, which will obtain instantaneous CSI of only a subset of all the channel links and statistical CSI for the others, thus forming the mixed CSI at the BBU pool. This subset is determined by the statistical CSI. Then we propose a new stochastic beamforming framework to minimize the total transmit power while guaranteeing quality-of-service (QoS) requirements with the mixed CSI. Simulation results show that the proposed CSI acquisition scheme with stochastic beamforming can significantly reduce the CSI overhead while providing performance close to that with full CSI.
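A very rough sketch of the link-selection idea behind compressive CSI acquisition is given below (hedged heavily: the lognormal channel model, the "keep the strongest links per user" rule, and all parameter values are assumptions for illustration, not the scheme's actual design):

```python
# Schematic sketch: request instantaneous CSI only for links whose statistical
# (large-scale) gain is significant, and fall back to statistical CSI elsewhere,
# forming "mixed CSI" at the BBU pool. All modeling choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_rrh, n_users, keep = 8, 4, 3        # remote radio heads, users, links kept per user

# Statistical CSI: average channel gains (e.g., from path loss / shadowing).
avg_gain = rng.lognormal(mean=0.0, sigma=1.0, size=(n_rrh, n_users))

# For each user, request instantaneous CSI only on its `keep` strongest links.
mask = np.zeros_like(avg_gain, dtype=bool)
for u in range(n_users):
    strongest = np.argsort(avg_gain[:, u])[-keep:]
    mask[strongest, u] = True

# Mixed CSI: instantaneous small-scale fading where measured, statistical gains elsewhere.
inst = np.sqrt(avg_gain / 2) * (rng.standard_normal(avg_gain.shape)
                                + 1j * rng.standard_normal(avg_gain.shape))
mixed_csi = np.where(mask, inst, np.sqrt(avg_gain).astype(complex))

print(f"instantaneous CSI kept for {mask.sum()} of {mask.size} links")
```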

50 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...In order to derive an analytic expression for the minimum $J^*$, we use Chernoff's inequality [17] to yield an upper bound for $\sum_{i=1}^{N_K-1}(J$...

