Journal ArticleDOI

A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

01 Dec 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 4, pp 493-507
TL;DR: Each test that rejects $H_0$ when $S_n = \sum^n_{j=1} X_j \leqq k$ is assigned an index $\rho$; the likelihood ratio test for fixed sample size can be reduced to this form, and for large samples a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test, where $e = \log \rho_1/\log \rho_2$.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
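The abstract's key approximation is easy to check numerically. The sketch below is an editorial illustration rather than anything from the paper: it takes $X \sim \mathrm{Bernoulli}(p)$ under $H_0$, computes $m = \min_t E\,e^{t(X - a)}$ with a generic optimizer, and compares $\log m$ against $\frac{1}{n}\log P(S_n \leqq na)$; the two agree closely for large $n$.

```python
# Illustration only (Bernoulli example chosen for this sketch, not from the paper):
# P(S_n <= n*a) behaves roughly like m^n, where m is the minimum value of the
# moment generating function of X - a.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

p, a, n = 0.5, 0.3, 2000            # X ~ Bernoulli(p) under H_0, threshold a < p

def mgf_shifted(t):
    # E[exp(t*(X - a))] for X ~ Bernoulli(p)
    return np.exp(-t * a) * (1 - p + p * np.exp(t))

m = minimize_scalar(mgf_shifted).fun          # m = min_t E[exp(t*(X - a))]
exact = binom.cdf(np.floor(n * a), n, p)      # exact P(S_n <= n*a)

print("log m               :", np.log(m))
print("(1/n) log P(S_n<=na):", np.log(exact) / n)   # nearly equal for large n
```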
Citations
Proceedings ArticleDOI
22 Apr 2001
TL;DR: This paper describes a new algorithm that uses Structural-EM for learning maximum likelihood trees, and proves that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge.
Abstract: A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well-supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during its iterations the topology is improved as well as the branch lengths. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-Step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.
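The convergence claim rests on the standard EM guarantee that an E-step/M-step iteration never decreases the likelihood. The toy sketch below is not the paper's Structural-EM over tree topologies; it fits an ordinary two-coin mixture on invented data simply to exhibit that monotone improvement.

```python
# Toy EM (not Structural-EM on trees): fit a mixture of two biased coins and
# check that the log-likelihood never decreases across iterations.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
m_flips, n_trials = 20, 300
true_p = np.array([0.35, 0.80])                 # hidden coin biases
coins = rng.integers(0, 2, size=n_trials)       # which coin produced each trial
heads = rng.binomial(m_flips, true_p[coins])    # observed head counts

def log_likelihood(w, p):
    comp = np.log(w) + binom.logpmf(heads[:, None], m_flips, p)
    return np.logaddexp.reduce(comp, axis=1).sum()

w = np.array([0.5, 0.5])                        # initial mixture weights
p = np.array([0.40, 0.60])                      # initial bias guesses
prev = log_likelihood(w, p)
for _ in range(50):
    # E-step: posterior responsibility of each coin for each trial
    log_r = np.log(w) + binom.logpmf(heads[:, None], m_flips, p)
    r = np.exp(log_r - np.logaddexp.reduce(log_r, axis=1, keepdims=True))
    # M-step: closed-form re-estimation from expected counts
    w = r.mean(axis=0)
    p = (r * heads[:, None]).sum(axis=0) / (r.sum(axis=0) * m_flips)
    cur = log_likelihood(w, p)
    assert cur >= prev - 1e-8                   # EM's monotone improvement
    prev = cur
print("estimated biases:", np.round(np.sort(p), 3))
```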

110 citations

Journal ArticleDOI
TL;DR: A risk premium reflects both the price of risk and a security's degree of exposure to risk; the paper examines how exposures to macroeconomic shocks are priced by decentralized security markets.
Abstract: Asset pricing theory has long recognized that financial markets compensate investors who are exposed to some components of uncertainty. This is where macroeconomics comes into play. The economywide shocks, the primary concern of macroeconomists, by their nature are not diversifiable. Exposures to these shocks cannot be averaged out with exposures to other shocks. Thus returns on assets that depend on these macroeconomic shocks reflect “risk” premia and are a linchpin connecting macroeconomic uncertainty to financial markets. A risk premium reflects both the price of risk and the degree of exposure to risk. I will be particularly interested in how the exposures to macroeconomic impulses are priced by decentralized security markets.
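Read numerically, the closing sentences say that a security's risk premium is its exposure to each macroeconomic shock times the price attached to that shock, summed over shocks. The snippet below is a purely illustrative linear-factor sketch with invented numbers, not the model developed in the paper.

```python
# Hypothetical exposures (betas) and prices of risk (lambdas); risk premium
# = sum over shocks of exposure * price of risk. All numbers are invented.
betas = {"productivity": 1.2, "inflation": -0.3}     # exposures of one security
lambdas = {"productivity": 0.04, "inflation": 0.01}  # price of risk per shock
risk_premium = sum(betas[s] * lambdas[s] for s in betas)
print(f"model-implied risk premium: {risk_premium:.1%} per year")   # 4.5%
```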

109 citations

Journal ArticleDOI
TL;DR: ORS, an opportunistic resource sharing-based mapping framework in which substrate resources are opportunistically shared among multiple virtual networks; simulations show that ORS utilizes substrate resources more efficiently than two state-of-the-art fixed-resource embedding schemes.
Abstract: Network virtualization has emerged as a promising approach to overcome the ossification of the Internet. A major challenge in network virtualization is the so-called virtual network embedding problem, which deals with the efficient embedding of virtual networks with resource constraints into a shared substrate network. A number of heuristics have been proposed to cope with the NP-hardness of this problem; however, all of the existing proposals reserve fixed resources throughout the entire lifetime of a virtual network. In this paper, we re-examine this problem with the position that time-varying resource requirements of virtual networks should be taken into consideration, and we present an opportunistic resource sharing-based mapping framework, ORS, where substrate resources are opportunistically shared among multiple virtual networks. We formulate the time slot assignment as an optimization problem; then, we prove the decision version of the problem to be NP-hard in the strong sense. Observing the resemblance between our problem and the bin packing problem, we adopt the core idea of first-fit and propose two practical solutions: first-fit by collision probability (CFF) and first-fit by expectation of indicators' sum (EFF). Simulation results show that ORS provides a more efficient utilization of substrate resources than two state-of-the-art fixed-resource embedding schemes.
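The abstract borrows the core idea of first-fit from bin packing. The sketch below shows only that generic first-fit placement with made-up demands and capacities; it is not the paper's CFF or EFF heuristic, which additionally weigh collision probabilities when sharing time slots.

```python
# Generic first-fit (the bin-packing idea the abstract builds on), not CFF/EFF.
def first_fit(demands, capacity):
    """Place each demand into the first opened bin with enough residual room."""
    bins = []                          # residual capacity of each opened bin
    placement = []                     # bin index chosen for each demand
    for d in demands:
        for i, free in enumerate(bins):
            if d <= free:
                bins[i] -= d
                placement.append(i)
                break
        else:                          # no existing bin fits: open a new one
            bins.append(capacity - d)
            placement.append(len(bins) - 1)
    return placement, len(bins)

print(first_fit([4, 3, 5, 2, 6, 1], capacity=10))   # ([0, 0, 1, 0, 2, 0], 3)
```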

108 citations


Cites background from "A Measure of Asymptotic Efficiency ..."

  • …lifetime of the respective VN request, we must allocate the required number of dedicated time slots for them; however, for the variable subrequirements, since they occur with a probability that is less than 1, sharing may be a viable choice to conserve substrate resources for future VN requests....


Journal ArticleDOI
TL;DR: It is proved that $R(N)=N^{1/4+o(1)}$, showing that Roth’s original lower bound was essentially best possible; the notion of discrepancy of hypergraphs is introduced and an upper bound is derived from which this result follows.
Abstract: Let $g$ be a coloring of the set $\{1, \ldots, N\} = [1,N]$ in red and blue. For each arithmetic progression $A$ in $[1,N]$, consider the absolute value of the difference of the numbers of red and of blue members of $A$. Let $R(g)$ be the maximum of this number over all arithmetic progressions (the discrepancy of $g$). Set $R(N) = \min_g R(g)$ over all two-colorings $g$. A remarkable result of K. F. Roth gives $R(N) \gg N^{1/4}$. On the other hand, Roth observed that $R(N) \ll N^{1/2+\varepsilon}$ and suggested that this bound was nearly sharp. A. Sarkozy disproved this by proving $R(N) \ll N^{1/3+\varepsilon}$. We prove that $R(N)=N^{1/4+o(1)}$, thus showing that Roth’s original lower bound was essentially best possible. Our result is more general. We introduce the notion of discrepancy of hypergraphs and derive an upper bound from which the above result follows.
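For small $N$ the definitions above can be checked by brute force. The sketch below is illustrative only: the exact minimum over all $2^N$ colorings is replaced by sampling, so it yields an upper bound on $R(N)$. It computes $R(g)$ for a $\pm 1$ coloring by scanning every arithmetic progression in $[1,N]$.

```python
# Brute-force discrepancy of a two-coloring over all arithmetic progressions.
import random

def discrepancy(g):                    # g[i-1] = +1 (red) or -1 (blue), i in [1, N]
    N = len(g)
    best = 0
    for start in range(1, N + 1):
        for diff in range(1, N + 1):
            s = 0
            for x in range(start, N + 1, diff):
                s += g[x - 1]          # every prefix of this AP is itself an AP
                best = max(best, abs(s))
    return best

N = 32
random.seed(0)
estimate = min(discrepancy([random.choice((+1, -1)) for _ in range(N)])
               for _ in range(200))
print(f"R({N}) <= {estimate} (minimum over 200 random colorings)")
```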

108 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: Simulations with three measurement tasks show that SCREAM can support more measurement tasks with higher accuracy than existing approaches, and can multiplex resources among network-wide measurement tasks.
Abstract: Software-defined networks can enable a variety of concurrent, dynamically instantiated measurement tasks that provide fine-grained visibility into network traffic. Recently, there have been many proposals for using sketches for network measurement. However, sketches in hardware switches use constrained resources such as SRAM memory, and the accuracy of measurement tasks is a function of the resources devoted to them on each switch. This paper presents SCREAM, a system for allocating resources to sketch-based measurement tasks that ensures a user-specified minimum accuracy. SCREAM estimates the instantaneous accuracy of tasks so as to dynamically adapt the allocated resources for each task. Thus, by finding the right amount of resources for each task on each switch and correctly merging sketches at the controller, SCREAM can multiplex resources among network-wide measurement tasks. Simulations with three measurement tasks (heavy hitter, hierarchical heavy hitter, and super source/destination detection) show that SCREAM can support more measurement tasks with higher accuracy than existing approaches.
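A hedged illustration of the merging step mentioned above, assuming Count-Min-style sketches with shared hash functions (an editorial simplification, not SCREAM's actual data structures): such sketches are linear, so per-switch counters can be combined at the controller by element-wise addition before querying network-wide counts.

```python
# Count-Min sketches from different switches, built with the same hashing,
# merge by element-wise addition; queries return overestimates, never under.
import numpy as np

DEPTH, WIDTH = 4, 256                       # hash rows x counters per row

def bucket(key, row):
    return hash((row, key)) % WIDTH         # same hashing on every switch (toy;
                                            # a real system needs a fixed hash)

def update(cm, key, count=1):
    for r in range(DEPTH):
        cm[r, bucket(key, r)] += count

def query(cm, key):
    return min(cm[r, bucket(key, r)] for r in range(DEPTH))

switch_a = np.zeros((DEPTH, WIDTH), dtype=int)
switch_b = np.zeros((DEPTH, WIDTH), dtype=int)
update(switch_a, "flow-42", 7)              # same flow observed at two switches
update(switch_b, "flow-42", 5)

controller = switch_a + switch_b            # merge at the controller
print(query(controller, "flow-42"))         # >= 12, typically exactly 12
```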

107 citations


Cites background or methods from "A Measure of Asymptotic Efficiency ..."

  • ...The reason is that the networkwide collision (a random variable) is the sum of collisions at individual switches (sum of random variables) [11]....


  • ...However, since the collision on a sketch is independent from the collision on another, we can replace Markov’s bound with Chernoff’s bound [11] to get a more accurate estimation of $p_j$ (see our technical report [34])....

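The second excerpt contrasts Markov's and Chernoff's bounds for a sum of independent collision indicators. The comparison below, with invented numbers and a plain Binomial model, shows why the swap helps: the Chernoff bound is orders of magnitude tighter on the upper tail.

```python
# Markov vs. Chernoff on P(S >= a) for S ~ Binomial(k, q); numbers are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

k, q, a = 50, 0.1, 15                  # 50 independent collisions, threshold 15
markov = k * q / a                     # Markov: P(S >= a) <= E[S] / a
chernoff = minimize_scalar(
    lambda t: np.exp(-t * a) * (1 - q + q * np.exp(t)) ** k,
    bounds=(0.0, 20.0), method="bounded").fun   # min_t e^{-ta} E[e^{tS}]
exact = binom.sf(a - 1, k, q)          # exact P(S >= a)
print(f"Markov {markov:.3g}   Chernoff {chernoff:.3g}   exact {exact:.3g}")
```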

References