Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a sum-of-observations test, and that for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
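The index $\rho$ described in the abstract is the minimum of the moment generating function of $X - a$. A minimal stdlib Python sketch (function and variable names are ours, not the paper's) recovers it numerically for a Bernoulli observation and compares against the known closed form $e^{-\mathrm{KL}(a \| p)}$ for that case:

```python
import math

def chernoff_index(mgf, a, t_lo=-50.0, t_hi=50.0, iters=200):
    """Return min over real t of E[exp(t*(X - a))].

    `mgf(t)` must return E[exp(t*X)].  The objective is log-convex in t,
    hence unimodal, so a simple golden-section search suffices.
    (Illustrative sketch only; the search interval is an assumption.)
    """
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    f = lambda t: math.exp(-t * a) * mgf(t)
    lo, hi = t_lo, t_hi
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f((lo + hi) / 2.0)

# Bernoulli(p) observations, test "reject H0 if S_n <= n*a" with a < p:
p, a = 0.5, 0.2
m = chernoff_index(lambda t: 1.0 - p + p * math.exp(t), a)

# For the Bernoulli case the minimum has the closed form exp(-KL(a||p)),
# so P(S_n <= n*a) behaves roughly like m**n, a relative-entropy decay rate.
kl = a * math.log(a / p) + (1.0 - a) * math.log((1.0 - a) / (1.0 - p))
```

With $p = 0.5$ and $a = 0.2$ the search returns $m \approx 0.8247$, matching $e^{-\mathrm{KL}(0.2 \| 0.5)}$.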
Citations
Journal ArticleDOI
TL;DR: This paper suggests an approach to augment the performance of a learning algorithm for mental task classification built on power spectral density (PSD) features and feature selection, and presents a comparative analysis of multivariate and univariate feature selection for mental task classification.
Abstract: In this paper, the classification of mental tasks for brain–computer interfaces (BCIs) is investigated. Mental tasks are a dominant area of investigation in BCI, of utmost interest because such systems can augment the lives of people with severe disabilities. The performance of a BCI model depends primarily on the construction of features from the brain's electroencephalography (EEG) signal and on the size of the feature vector, which is obtained through multiple channels. The number of training samples available per feature is minimal for mental task classification. Feature selection is therefore used to increase this ratio by discarding irrelevant and superfluous features. This paper suggests an approach to augment the performance of a learning algorithm for mental task classification based on power spectral density (PSD) features combined with feature selection. The paper also presents a comparative analysis of multivariate and univariate feature selection for mental task classification. The findings demonstrate substantial improvements in the performance of the learning model for mental task classification. Moreover, the efficacy of the proposed approach is endorsed by a robust ranking algorithm and Friedman's statistical test for finding and comparing the best combinations of PSD and feature selection methods.
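The abstract does not specify which univariate criterion is used, so as a generic illustration only (not the paper's method), here is a stdlib sketch of univariate feature selection with a Fisher-style score: rank each feature column by between-class separation over within-class spread, then keep the top k.

```python
def fisher_score(feature, labels):
    """Univariate score: between-class separation over within-class spread."""
    groups = {}
    for x, y in zip(feature, labels):
        groups.setdefault(y, []).append(x)
    overall = sum(feature) / len(feature)
    between = within = 0.0
    for vals in groups.values():
        mu = sum(vals) / len(vals)
        between += len(vals) * (mu - overall) ** 2
        within += sum((v - mu) ** 2 for v in vals)
    return between / within if within else float("inf")

def select_k_best(X, y, k):
    """Return the indices of the k highest-scoring feature columns of X."""
    scores = [fisher_score([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(sorted(range(len(scores)), key=lambda j: -scores[j])[:k])

# Toy demo: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 5.0], [1.1, 1.0]]
y = [0, 0, 1, 1]
picked = select_k_best(X, y, 1)  # keeps feature 0
```

This is the univariate style of selection the abstract contrasts with multivariate methods: each feature is scored in isolation, so redundancy between features is not accounted for.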

27 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • ...For multivariate normal probability distribution, Chernoff distance measure is defined as [62]...


Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of determining the energy distribution of quantum states that satisfy exponential decay of correlation and product states, with respect to a quantum local Hamiltonian on a spin lattice.
Abstract: We consider the problem of determining the energy distribution of quantum states that satisfy exponential decay of correlation, and of product states, with respect to a quantum local Hamiltonian on a spin lattice. For a quantum state on a D-dimensional lattice that has correlation length σ and has average energy e with respect to a given local Hamiltonian (with n local terms, each of which has norm at most 1), we show that the overlap of this state with the eigenspace of energy f is at most . This bound holds whenever . Thus, on a one-dimensional lattice, the tail of the energy distribution decays exponentially with the energy. For product states, we improve the above result to obtain a Gaussian decay in energy, even for quantum spin systems without an underlying lattice structure. Given a product state on a collection of spins which has average energy e with respect to a local Hamiltonian (with n local terms, each local term overlapping with at most m other local terms), we show that the overlap of this state with the eigenspace of energy f is at most . This bound holds whenever .

27 citations

Journal ArticleDOI
TL;DR: The least median-of-squares (LMS) regression line estimator is one of the best known robust estimators as discussed by the authors, and it can be computed in $O(n \log^2 n)$ time.
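The exact $O(n \log^2 n)$ algorithm is involved; the LMS criterion itself can be illustrated with a common brute-force approximation that restricts attention to lines through pairs of sample points (an illustrative sketch, not the cited algorithm — the true LMS optimum need not pass through two data points):

```python
from statistics import median

def lms_line(points):
    """Approximate least-median-of-squares line fit.

    Brute force over the O(n^2) lines through pairs of points, keeping
    the line whose median squared residual is smallest.
    """
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:
                continue  # skip vertical candidate lines
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            med = median((y - (slope * x + intercept)) ** 2 for x, y in points)
            if best is None or med < best[0]:
                best = (med, slope, intercept)
    return best[1], best[2]

# Six points on y = 2x + 1 plus one gross outlier: the median of squared
# residuals ignores the outlier, so the clean line is recovered exactly.
slope, intercept = lms_line(
    [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (5, 11), (2, 100)]
)
```

The robustness claimed in the TL;DR shows up directly: a least-squares fit would be dragged toward the outlier, while the median of squared residuals is unaffected by it.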

27 citations

Book ChapterDOI
Qi-Man Shao
01 Jan 1998
TL;DR: A survey of self-normalized limit theorems can be found in this article, where it is shown that hardly any moment conditions are needed for self-normalized large deviation results.
Abstract: The normalizing constants in classical limit theorems are usually sequences of real numbers. Moment conditions or other related assumptions are necessary and sufficient for many classical limit theorems. However, the situation becomes very different when the normalizing constants are sequences of random variables. The recent discovery of self-normalized large deviations shows that hardly any moment conditions are needed for such large deviation type results. A self-normalized law of the iterated logarithm remains valid for all distributions in the domain of attraction of a normal or stable law. This reveals that self-normalization preserves essential properties much better than deterministic normalization does. This chapter briefly surveys some recent developments in self-normalized limit theorems, as limit theory plays a fundamental role in the development of probability and statistics.

27 citations

Journal ArticleDOI
TL;DR: This paper presents distribution-free testers for these classes with query complexity $\tilde{O}(n^{1/2}/\epsilon)$ and defines and exploits certain structural properties of monomials (and functions that differ from them on a non-negligible part of the input space), which were not used in previous work on property testing.
Abstract: We consider the problem of distribution-free testing of the class of monotone monomials and the class of monomials over n variables. While there are very efficient testers for a variety of classes of functions when the underlying distribution is uniform, designing distribution-free testers (which must work under an arbitrary and unknown distribution) tends to be more challenging. When the underlying distribution is uniform, Parnas et al. (SIAM J. Discr. Math., 2002) give a tester for (monotone) monomials whose query complexity does not depend on n, and whose dependence on the distance parameter is (inverse) linear. In contrast, Glasner and Servedio (Theory of Computing, 2009) prove that every distribution-free tester for monotone monomials as well as for general monomials must have query complexity $\tilde{\Omega}(n^{1/5})$ (for a constant distance parameter $\epsilon$). In this paper we present distribution-free testers for these classes with query complexity $\tilde{O}(n^{1/2}/\epsilon)$. We note that in contrast to previous results for distribution-free testing, our testers do not build on the testers that work under the uniform distribution. Rather, we define and exploit certain structural properties of monomials (and functions that differ from them on a non-negligible part of the input space), which were not used in previous work on property testing.

27 citations


Cites methods from "A Measure of Asymptotic Efficiency ..."

  • ...Letting c1 = 64 (so that s = 64 √ n/ε), by the foregoing discussion and by applying a multiplicative Chernoff bound [2], we have that...

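The multiplicative Chernoff bound invoked in the quote above can be checked numerically against an exact binomial tail. A minimal stdlib sketch, using the standard form $P[X \geq (1+\delta)\mu] \leq e^{-\delta^2\mu/(2+\delta)}$ from the general literature (the specific constants $c_1 = 64$ and $s$ in the quote belong to the citing paper and are not reproduced here):

```python
import math

def binom_upper_tail(n, p, k):
    """Exact P[Bin(n, p) >= k], summed directly from binomial coefficients."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def chernoff_upper(n, p, delta):
    """Multiplicative Chernoff bound P[X >= (1+delta)*mu] <= exp(-delta^2*mu/(2+delta))."""
    mu = n * p
    return math.exp(-delta * delta * mu / (2 + delta))

# 200 fair coin flips: probability of at least 20% more heads than expected.
n, p, delta = 200, 0.5, 0.2
k = math.ceil((1 + delta) * n * p)
exact = binom_upper_tail(n, p, k)   # exact tail probability
bound = chernoff_upper(n, p, delta) # Chernoff upper estimate
```

The exact tail is strictly below the bound, as the inequality guarantees; the bound is loose in absolute terms but, as the 1952 paper shows, it decays at the correct exponential rate in n.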
