
Showing papers on "Upper and lower bounds published in 2006"


Journal ArticleDOI
TL;DR: The sequential tree-reweighted message passing (TRW-S) algorithm discussed in this article is a modification of tree-reweighted max-product message passing (TRW), which was proposed by Wainwright et al.
Abstract: Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper, we focus on the recent technique proposed by Wainwright et al. (Nov. 2005) - tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound - it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both the ordinary belief propagation and tree-reweighted algorithm in (M. J. Wainwright, et al., Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts.
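The full TRW-S algorithm involves reweighted messages over a collection of monotonic chains; as a much smaller illustration of the chain subproblem it sweeps over, the sketch below performs an exact min-sum (Viterbi-style) pass on a chain-structured energy. The toy energy, label count and Potts penalty are invented for illustration and are not taken from the paper.

```python
import numpy as np

def chain_min_sum(unary, pairwise):
    """Exactly minimize sum_i unary[i][x_i] + sum_i pairwise[x_i, x_{i+1}]
    over a chain by a forward min-sum pass and a backward argmin trace."""
    n, k = unary.shape
    msg = np.zeros((n, k))                 # msg[i][s] = best cost of a prefix ending in state s
    back = np.zeros((n, k), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        # cost of arriving in each state of node i from every state of node i-1
        cand = msg[i - 1][:, None] + pairwise + unary[i][None, :]
        back[i] = np.argmin(cand, axis=0)
        msg[i] = np.min(cand, axis=0)
    # backtrack the optimal labelling
    x = np.zeros(n, dtype=int)
    x[-1] = int(np.argmin(msg[-1]))
    for i in range(n - 1, 0, -1):
        x[i - 1] = back[i, x[i]]
    return x, float(np.min(msg[-1]))

# toy Potts-like example: 5 nodes, 3 labels
rng = np.random.default_rng(0)
unary = rng.random((5, 3))
pairwise = 0.5 * (1 - np.eye(3))           # Potts penalty for differing labels
labels, energy = chain_min_sum(unary, pairwise)
print(labels, energy)
```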

1,116 citations


Book ChapterDOI
15 Aug 2006
TL;DR: An online deterministic algorithm named BAR is presented and proved to be 4.56-competitive, which improves upon the previous algorithm of Kim and Chwa, shown to be 5-competitive by Chan et al.
Abstract: We study an on-line broadcast scheduling problem in which requests have deadlines, and the objective is to maximize the weighted throughput, i.e., the weighted total length of the satisfied requests. For the case where all requested pages have the same length, we present an online deterministic algorithm named BAR and prove that it is 4.56-competitive. This improves the previous algorithm of Kim and Chwa [11] which is shown to be 5-competitive by Chan et al. [4]. In the case that pages may have different lengths, we prove a lower bound of Ω(Δ/logΔ) on the competitive ratio where Δ is the ratio of maximum to minimum page lengths. This improves upon the previous $\sqrt{\Delta}$ lower bound in [11,4] and is much closer to the current upper bound of ($\Delta+2\sqrt{\Delta}+2$) in [7]. Furthermore, for small values of Δ we give better lower bounds.

669 citations


Journal ArticleDOI
TL;DR: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis.
Abstract: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
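As a toy numerical illustration of the simulation-based idea (not the paper's source/channel models), the sketch below samples a long output sequence from a small two-state hidden-Markov model and runs a scaled forward sum-product recursion to accumulate -(1/n) log p(y^n), which converges to the output entropy rate; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state hidden-Markov model: state transitions and per-state output
# distributions over a binary output alphabet.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])            # P[s, s'] = Pr(next state s' | state s)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # B[s, y]  = Pr(output y | state s)

# Sample a long output sequence y_1..y_n.
n = 50_000
s = 0
y = np.empty(n, dtype=int)
for k in range(n):
    y[k] = rng.choice(2, p=B[s])
    s = rng.choice(2, p=P[s])

# Scaled forward sum-product recursion: accumulate log p(y^n) stably.
alpha = np.array([0.5, 0.5])            # initial state distribution
logp = 0.0
for k in range(n):
    alpha = (alpha * B[:, y[k]]) @ P    # multiply by emission, then propagate
    scale = alpha.sum()                 # = p(y_k | y_1..y_{k-1})
    logp += np.log(scale)
    alpha /= scale

print("estimated entropy rate H(Y) ≈", -logp / n / np.log(2), "bits/symbol")
```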

598 citations


Journal ArticleDOI
TL;DR: In this article, the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space, is studied; when flavour dynamics is included, there is no model-independent limit on the light neutrino mass scale, and the lower bound on the reheat temperature is relaxed by a factor of about 3-10.
Abstract: We study the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space. In the Boltzmann equations we find different numerical factors and additional terms which can affect the results significantly. The upper bound on the CP asymmetry in a specific flavour is weaker than the bound on the sum. This suggests that -- when flavour dynamics is included -- there is no model-independent limit on the light neutrino mass scale, and that the lower bound on the reheat temperature is relaxed by a factor ~ (3 - 10).

449 citations


Journal ArticleDOI
20 Jan 2006-Sensors
TL;DR: It is shown that the lower bound of the send-on-delta effectiveness is independent of the sampling resolution and constitutes a built-in feature of the input signal.
Abstract: The paper addresses the send-on-delta data-collecting strategy for capturing information from the environment. The send-on-delta concept is a signal-dependent temporal sampling scheme in which a sample is taken whenever the signal deviates by delta, defined as a significant change of its value. It is an attractive scheme for wireless sensor networking because of its efficient use of energy. Quantitative evaluations of the send-on-delta scheme for general continuous-time bandlimited signals are presented. Bounds on the mean traffic of reports for a given signal and an assumed sampling resolution are evaluated. Furthermore, the send-on-delta effectiveness, defined as the reduction of the mean rate of reports in comparison to periodic sampling at a given resolution, is derived. It is shown that the lower bound of the send-on-delta effectiveness (i.e. the guaranteed reduction) is independent of the sampling resolution and constitutes a built-in feature of the input signal. The calculation of the effectiveness is exemplified for standard signals that model the state evolution of a dynamic environment in time. Finally, an example of send-on-delta programming is shown.
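A minimal discrete-time emulation of the send-on-delta rule, assuming a densely sampled signal stands in for the continuous-time input; the signal and the threshold delta are invented for illustration.

```python
import numpy as np

def send_on_delta(t, x, delta):
    """Report (t_k, x_k) only when the signal has moved by at least `delta`
    since the last report; the first sample is always reported."""
    reports = [(t[0], x[0])]
    last = x[0]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - last) >= delta:
            reports.append((ti, xi))
            last = xi
    return reports

# Example: a slowly varying signal sampled densely, reported sparsely.
t = np.linspace(0.0, 10.0, 10_001)
x = np.sin(t) + 0.3 * np.sin(3 * t)
reports = send_on_delta(t, x, delta=0.1)
print(f"{len(reports)} reports instead of {len(t)} periodic samples "
      f"(reduction factor ≈ {len(t) / len(reports):.1f})")
```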

446 citations


Journal ArticleDOI
TL;DR: A new version of the quantum threshold theorem is proved that applies to concatenation of a quantum code that corrects only one error, and this theorem is used to derive a rigorous lower bound on the quantum accuracy threshold ε0, the best lower bound that has been rigorously proven so far.
Abstract: We prove a new version of the quantum threshold theorem that applies to concatenation of a quantum code that corrects only one error, and we use this theorem to derive a rigorous lower bound on the quantum accuracy threshold ε0. Our proof also applies to concatenation of higher-distance codes, and to noise models that allow faults to be correlated in space and in time. The proof uses new criteria for assessing the accuracy of fault-tolerant circuits, which are particularly conducive to the inductive analysis of recursive simulations. Our lower bound on the threshold, ε0 ≥ 2.73 × 10^-5 for an adversarial independent stochastic noise model, is derived from a computer-assisted combinatorial analysis; it is the best lower bound that has been rigorously proven so far.
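For orientation, the qualitative content of a threshold theorem for a distance-3 code (one correcting a single error) is the textbook double-exponential suppression of the logical failure rate under concatenation; this is the standard schematic relation, not the paper's refined criteria:

```latex
\varepsilon^{(1)} \le c\,\varepsilon^{2},
\qquad
\varepsilon^{(k)} \le \varepsilon_{0}\left(\frac{\varepsilon}{\varepsilon_{0}}\right)^{2^{k}},
\qquad
\varepsilon_{0} := 1/c .
```

The failure probability therefore falls doubly exponentially in the concatenation level k whenever the physical error rate satisfies ε < ε0, which is why ε0 plays the role of an accuracy threshold.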

440 citations


Journal ArticleDOI
TL;DR: In this article, the authors study an empirical risk minimization problem over a class of measurable functions, where the goal is to obtain very general upper bounds on the excess risk of the empirical minimizer, expressed in terms of relevant geometric parameters of the class.
Abstract: Let ℱ be a class of measurable functions f: S ↦ [0, 1] defined on a probability space (S, $\mathcal{A}$, P). Given a sample (X1, …, Xn) of i.i.d. random variables taking values in S with common distribution P, let Pn denote the empirical measure based on (X1, …, Xn). We study an empirical risk minimization problem Pn f → min, f ∈ ℱ. Given a solution f̂n of this problem, the goal is to obtain very general upper bounds on its excess risk $$\mathcal{E}_{P}(\hat{f}_{n}):=P\hat{f}_{n}-\inf_{f\in \mathcal{F}}Pf,$$ expressed in terms of relevant geometric parameters of the class ℱ. Using concentration inequalities and other empirical process tools, we obtain both distribution-dependent and data-dependent upper bounds on the excess risk that are of asymptotically correct order in many examples. The bounds involve localized sup-norms of empirical and Rademacher processes indexed by functions from the class. We use these bounds to develop model selection techniques in abstract risk minimization problems that can be applied to more specialized frameworks of regression and classification.
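A self-contained toy illustration of empirical risk minimization over a finite class (threshold classifiers on the line), with the excess risk estimated on a large independent sample; the data-generating distribution and the class are invented and unrelated to the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: X ~ Uniform[0,1], Y = 1{X > 0.3} with labels flipped with probability 0.1.
def sample(n):
    x = rng.random(n)
    y = (x > 0.3).astype(int)
    flip = rng.random(n) < 0.1
    return x, np.where(flip, 1 - y, y)

# Finite class F: threshold rules f_t(x) = 1{x > t} on a grid of thresholds.
thresholds = np.linspace(0, 1, 101)

def risk(t, x, y):                      # 0-1 loss
    return np.mean((x > t).astype(int) != y)

x_train, y_train = sample(500)
emp_risks = np.array([risk(t, x_train, y_train) for t in thresholds])
t_hat = thresholds[np.argmin(emp_risks)]        # the empirical risk minimizer

# Approximate the true risk P f with a large independent sample.
x_test, y_test = sample(200_000)
true_risks = np.array([risk(t, x_test, y_test) for t in thresholds])
excess = risk(t_hat, x_test, y_test) - true_risks.min()
print(f"ERM threshold {t_hat:.2f}, excess risk ≈ {excess:.4f}")
```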

381 citations


Proceedings ArticleDOI
05 Jun 2006
TL;DR: It is demonstrated that the worst-case running time of the k-means method is superpolynomial by improving the best known lower bound from Ω(n) iterations to 2^Ω(√n).
Abstract: The k-means method is an old but popular clustering algorithm known for its observed speed and its simplicity. Until recently, however, no meaningful theoretical bounds were known on its running time. In this paper, we demonstrate that the worst-case running time of k-means is superpolynomial by improving the best known lower bound from Ω(n) iterations to 2^Ω(√n).
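For reference, the procedure the bound concerns is plain Lloyd iteration; a minimal sketch with an iteration counter is given below. On random data such as this it converges in a handful of iterations; the superpolynomial behaviour requires carefully constructed worst-case inputs.

```python
import numpy as np

def kmeans(points, k, rng):
    """Plain Lloyd's k-means; returns the centers and the number of iterations."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for iteration in range(1, 10_000):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: mean of each cluster (keep the old center if a cluster is empty)
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            return new_centers, iteration
        centers = new_centers
    return centers, iteration

rng = np.random.default_rng(3)
pts = rng.normal(size=(1000, 2))
_, iters = kmeans(pts, k=5, rng=rng)
print("converged after", iters, "iterations")
```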

368 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method for obtaining strict lower bound solutions using second-order cone programming (SOCP), for which efficient primal-dual interior-point algorithms have been developed.
Abstract: The formulation of limit analysis by means of the finite element method leads to an optimization problem with a large number of variables and constraints. Here we present a method for obtaining strict lower bound solutions using second-order cone programming (SOCP), for which efficient primal-dual interior-point algorithms have recently been developed. Following a review of previous work, we provide a brief introduction to SOCP and describe how lower bound limit analysis can be formulated in this way. Some methods for exploiting the data structure of the problem are also described, including an efficient strategy for detecting and removing linearly dependent constraints at the assembly stage. The benefits of employing SOCP are then illustrated with numerical examples. Through the use of an effective algorithm/software, very large optimization problems with up to 700 000 variables are solved in minutes on a desktop machine. The numerical examples concern plane strain conditions and the Mohr–Coulomb criterion; however, we show that SOCP can also be applied to any other problem of lower bound limit analysis involving a yield function with a conic quadratic form (notable examples being the Drucker–Prager criterion in 2D or 3D, and Nielsen's criterion for plates). Copyright © 2005 John Wiley & Sons, Ltd.
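Not the finite-element lower-bound formulation itself, but a minimal sketch of the conic-quadratic form referred to: in plane strain (tension positive) the Mohr–Coulomb condition at a single stress point can be written as a second-order cone constraint, and a load multiplier can be maximized subject to it. The sketch assumes cvxpy is available; the cohesion, friction angle and stress path are invented.

```python
import numpy as np
import cvxpy as cp

c, phi = 10.0, np.radians(30)          # cohesion and friction angle (illustrative)

# One stress point whose stresses scale linearly with a load multiplier lam along
# a fixed path (a stand-in for the equilibrium constraints of a real lower-bound
# finite-element model).
lam = cp.Variable(nonneg=True)
sig = lam * np.array([-1.0, -0.4, 0.3])     # (sigma_x, sigma_y, tau_xy), tension positive
sx, sy, txy = sig[0], sig[1], sig[2]

# Mohr-Coulomb in plane strain as a second-order cone constraint:
#   || ((sx - sy)/2, txy) ||_2  <=  c*cos(phi) - sin(phi)*(sx + sy)/2
yield_soc = cp.SOC(c * np.cos(phi) - np.sin(phi) * (sx + sy) / 2,
                   cp.hstack([(sx - sy) / 2, txy]))

prob = cp.Problem(cp.Maximize(lam), [yield_soc])
prob.solve()
print("largest load multiplier on this stress path:", round(float(lam.value), 2))
```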

359 citations


Journal ArticleDOI
Asmaa Abada, Sacha Davidson, F.-X. Josse-Michaux, Marta Losada, Antonio Riotto
TL;DR: In this paper, the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space, is studied; when flavour dynamics is included, there is no model-independent limit on the light neutrino mass scale, and the lower bound on the reheat temperature is relaxed by a factor of about 3-10.
Abstract: We study the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space. In the Boltzmann equations we find different numerical factors and additional terms which can affect the results significantly. The upper bound on the CP asymmetry in a specific flavour is weaker than the bound on the sum. This suggests that -- when flavour dynamics is included -- there is no model-independent limit on the light neutrino mass scale, and that the lower bound on the reheat temperature is relaxed by a factor ~ (3 - 10).

347 citations


Journal ArticleDOI
TL;DR: In this article, two strategies for the stabilization of discrete-time linear switched systems are proposed: one of open-loop nature (trajectory independent) and one of closed-loop nature (trajectory dependent), based on the solution of what the authors call Lyapunov–Metzler inequalities.
Abstract: This paper addresses two strategies for stabilization of discrete-time linear switched systems. The first one is of open-loop nature (trajectory independent) and is based on the determination of an upper bound of the minimum dwell time by means of a family of quadratic Lyapunov functions. The relevant point on dwell time calculation is that the proposed stability condition does not require the Lyapunov function to be uniformly decreasing at every switching time. The second one is of closed-loop nature (trajectory dependent) and is designed from the solution of what we call Lyapunov–Metzler inequalities, from which the stability condition is expressed. Being non-convex, a more conservative but simpler-to-solve version of the Lyapunov–Metzler inequalities is provided. The theoretical results are illustrated by means of examples.
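As a point of comparison, the classical mode-wise quadratic Lyapunov estimate of the dwell time (the conservative condition that the paper's analysis improves upon, since it does require decrease across switches) can be computed numerically; the two modes below are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh, solve_discrete_lyapunov

# Two Schur-stable modes of a discrete-time switched system (illustrative only).
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),
     np.array([[0.7, 0.0], [0.4, 0.9]])]

# Mode-wise quadratic Lyapunov functions V_i(x) = x' P_i x solving
# A_i' P_i A_i - P_i = -I  (solve_discrete_lyapunov(a, q) solves a X a' - X + q = 0).
P = [solve_discrete_lyapunov(Ai.T, np.eye(2)) for Ai in A]

# Geometric decay within each mode: V_i(x_{k+1}) <= rho * V_i(x_k).
rho = max(1.0 - 1.0 / np.linalg.eigvalsh(Pi).max() for Pi in P)

# Worst-case growth when switching Lyapunov functions: V_j(x) <= mu * V_i(x).
mu = max(eigh(Pj, Pi, eigvals_only=True).max() for Pi in P for Pj in P)

# Stability is guaranteed if mu * rho**T < 1, i.e. for dwell times T > ln(mu)/ln(1/rho).
print("classical quadratic-Lyapunov dwell-time estimate: T >",
      round(np.log(mu) / np.log(1.0 / rho), 2), "steps")
```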

Journal ArticleDOI
TL;DR: Two coding schemes that do not require the relay to decode any part of the message are investigated; it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme, and that the achievable rate of the side-information coding scheme can be improved via time sharing.
Abstract: Upper and lower bounds on the capacity and minimum energy-per-bit for general additive white Gaussian noise (AWGN) and frequency-division AWGN (FD-AWGN) relay channel models are established. First, the max-flow min-cut bound and the generalized block-Markov coding scheme are used to derive upper and lower bounds on capacity. These bounds are never tight for the general AWGN model and are tight only under certain conditions for the FD-AWGN model. Two coding schemes that do not require the relay to decode any part of the message are then investigated. First, it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme. It is also shown that the achievable rate of the side-information coding scheme can be improved via time sharing. In the second scheme, the relaying functions are restricted to be linear. The problem is reduced to a "single-letter" nonconvex optimization problem for the FD-AWGN model. The paper also establishes a relationship between the minimum energy-per-bit and capacity of the AWGN relay channel. This relationship together with the lower and upper bounds on capacity are used to establish corresponding lower and upper bounds on the minimum energy-per-bit that do not differ by more than a factor of 1.45 for the FD-AWGN relay channel model and 1.7 for the general AWGN model.
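For reference, the two generic bounds mentioned take the standard Cover-El Gamal form for a relay channel with source input X1, relay input/output X2, Y1 and destination output Y (stated here in their general form, not the paper's AWGN specializations):

```latex
C \;\le\; \max_{p(x_{1},x_{2})} \min\bigl\{\, I(X_{1},X_{2};Y),\; I(X_{1};Y,Y_{1}\mid X_{2}) \,\bigr\}
\quad\text{(max-flow min-cut upper bound)}

C \;\ge\; \max_{p(x_{1},x_{2})} \min\bigl\{\, I(X_{1},X_{2};Y),\; I(X_{1};Y_{1}\mid X_{2}) \,\bigr\}
\quad\text{(block-Markov / decode-and-forward lower bound)}
```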

Journal ArticleDOI
TL;DR: It is proved that the NP-hard distinguishing substring selection problem has no polynomial-time approximation scheme with running time f(1/ε)·n^o(1/ε) for any function f unless an unlikely collapse occurs in parameterized complexity theory.

01 Jan 2006
TL;DR: In this paper, a new exact algorithm for the Capacitated Vehicle Routing Problem (CVRP) based on the set partitioning formulation with additional cuts that correspond to capacity and clique inequalities is presented.
Abstract: This paper presents a new exact algorithm for the Capacitated Vehicle Routing Problem (CVRP) based on the set partitioning formulation with additional cuts that correspond to capacity and clique inequalities. The exact algorithm uses a bounding procedure that finds a near optimal dual solution of the LP-relaxation of the resulting mathematical formulation by combining three dual ascent heuristics. The first dual heuristic is based on the q-route relaxation of the set partitioning formulation of the CVRP. The second one combines Lagrangean relaxation, pricing and cut generation. The third attempts to close the duality gap left by the first two procedures using a classical pricing and cut generation technique. The final dual solution is used to generate a reduced problem containing only the routes whose reduced costs are smaller than the gap between an upper bound and the lower bound achieved. The resulting problem is solved by an integer programming solver. Computational results over the main instances from the literature show the effectiveness of the proposed algorithm.
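The set-partitioning core of the model has the standard form below, where R is the set of feasible routes, c_r the cost of route r, and a_ir = 1 if route r visits customer i; the capacity and clique cuts are added on top of it. The fleet-size constraint is shown as an equality, which may differ from the paper's exact variant:

```latex
\min \sum_{r \in R} c_{r} x_{r}
\quad \text{s.t.} \quad
\sum_{r \in R} a_{ir} x_{r} = 1 \;\; \forall\, i \in V\setminus\{0\},
\qquad
\sum_{r \in R} x_{r} = m,
\qquad
x_{r} \in \{0,1\} \;\; \forall\, r \in R .
```

The bounding procedure described in the abstract works on the LP relaxation of this model; the reduced problem keeps only routes whose reduced cost is below the gap between the upper bound and the dual lower bound.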

Journal ArticleDOI
TL;DR: An improved version of the FastICA algorithm is proposed which is asymptotically efficient, i.e., its accuracy, given by the residual error variance, attains the Cramér-Rao lower bound (CRB).
Abstract: FastICA is one of the most popular algorithms for independent component analysis (ICA), demixing a set of statistically independent sources that have been mixed linearly. A key question is how accurate the method is for finite data samples. We propose an improved version of the FastICA algorithm which is asymptotically efficient, i.e., its accuracy given by the residual error variance attains the Cramér-Rao lower bound (CRB). The error is thus as small as possible. This result is rigorously proven under the assumption that the probability distribution of the independent signal components belongs to the class of generalized Gaussian (GG) distributions with parameter alpha, denoted GG(alpha), for alpha > 2. We name the algorithm efficient FastICA (EFICA). The computational complexity of a Matlab implementation of the algorithm is shown to be only slightly (about three times) higher than that of the standard symmetric FastICA. Simulations corroborate these claims and show superior performance of the algorithm compared with the algorithm JADE of Cardoso and Souloumiac and the nonparametric ICA of Boscolo on separating sources with distribution GG(alpha) with arbitrary alpha, as well as on sources with bimodal distribution, and a good performance in separating linearly mixed speech signals.
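For context, the baseline that EFICA refines is the standard FastICA fixed-point iteration; a minimal one-unit version with the tanh nonlinearity is sketched below (whitening plus the update w <- E[z g(w'z)] - E[g'(w'z)] w). The mixing matrix, sources and seed are invented, and this is the plain algorithm, not the EFICA modification.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two independent non-Gaussian sources, linearly mixed.
n = 20_000
S = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X           # whitened data, shape (2, n)

# One-unit FastICA fixed-point iteration: w <- E[z g(w'z)] - E[g'(w'z)] w, then normalize.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    wz = w @ Z
    g, g_prime = np.tanh(wz), 1 - np.tanh(wz) ** 2
    w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-9  # convergence up to sign
    w = w_new
    if converged:
        break

estimate = w @ Z                                # one recovered source (up to sign/scale)
print("correlation with the true sources:",
      np.round([np.corrcoef(estimate, s)[0, 1] for s in S], 3))
```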

Journal ArticleDOI
TL;DR: A new method is provided by introducing free-weighting matrices and employing the lower bound of the time-varying delay; based on the Lyapunov-Krasovskii functional method, a sufficient condition for the asymptotic stability of the system is obtained.

Journal ArticleDOI
TL;DR: A signal-intensity-based maximum-likelihood target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs); it is much more accurate than the heuristic weighted-average methods and can reach the Cramér-Rao lower bound (CRLB) even with a relatively small amount of data.
Abstract: A signal-intensity-based maximum-likelihood (ML) target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs). The signal intensity received at local sensors is assumed to be inversely proportional to the square of the distance from the target. The ML estimator and its corresponding Cramér-Rao lower bound (CRLB) are derived. Simulation results show that this estimator is much more accurate than the heuristic weighted-average methods, and it can reach the CRLB even with a relatively small amount of data. In addition, the optimal design method for quantization thresholds, as well as two heuristic design methods, are presented. The heuristic design methods, which require minimal prior information about the system, prove to be very robust under various situations.
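A toy version of the setup, with one-bit quantization and a grid-search ML estimator; the sensor geometry, source power, noise model and threshold are all invented, and no attempt is made at the paper's optimal threshold design.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Known sensor positions, true target, source power, noise level, 1-bit threshold.
sensors = rng.uniform(0, 100, size=(25, 2))
target = np.array([42.0, 57.0])
P0, sigma, eta = 5_000.0, 1.0, 3.0

def mean_intensity(pos):
    d2 = ((sensors - pos) ** 2).sum(axis=1) + 1.0   # +1 avoids the singularity at d = 0
    return P0 / d2                                   # intensity decays as 1/d^2

# Each sensor reports a single bit b_i = 1{intensity + noise > eta}.
bits = (mean_intensity(target) + sigma * rng.normal(size=len(sensors)) > eta).astype(int)

# Grid-search ML: Pr(b_i = 1 | pos) = Q((eta - a_i(pos)) / sigma), Q = Gaussian tail.
grid = np.linspace(0.0, 100.0, 101)
best, best_ll = None, -np.inf
for gx in grid:
    for gy in grid:
        p1 = np.clip(norm.sf((eta - mean_intensity(np.array([gx, gy]))) / sigma),
                     1e-12, 1 - 1e-12)
        ll = np.sum(bits * np.log(p1) + (1 - bits) * np.log(1 - p1))
        if ll > best_ll:
            best, best_ll = (gx, gy), ll

print("true target:", target, "  ML estimate:", best)
```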

Proceedings ArticleDOI
14 Mar 2006
TL;DR: This paper introduces variance shadow maps, a new real time shadowing algorithm that stores the mean and mean squared of a distribution of depths, from which it can efficiently compute the variance over any filter region.
Abstract: Shadow maps are a widely used shadowing technique in real time graphics. One major drawback of their use is that they cannot be filtered in the same way as color textures, typically leading to severe aliasing. This paper introduces variance shadow maps, a new real time shadowing algorithm. Instead of storing a single depth value, we store the mean and mean squared of a distribution of depths, from which we can efficiently compute the variance over any filter region. Using the variance, we derive an upper bound on the fraction of a shaded fragment that is occluded. We show that this bound often provides a good approximation to the true occlusion, and can be used as an approximate value for rendering. Our algorithm is simple to implement on current graphics processors and solves the problem of shadow map aliasing with minimal additional storage and computation.
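The shading test itself is a one-sided Chebyshev bound: from the filtered mean and mean squared depth one recovers the variance and an upper bound p_max on the fraction of the filter region lying at or beyond the receiver's depth (i.e. not occluding it), which is then used as the visibility. A plain-Python sketch of that test (standing in for shader code):

```python
def vsm_visibility(mean_depth, mean_sq_depth, receiver_depth, min_variance=1e-4):
    """Upper bound (one-sided Chebyshev) on the fraction of the filter region
    whose stored depth is >= receiver_depth, i.e. on the fraction that does
    not occlude the receiver; used directly as the light visibility."""
    variance = max(mean_sq_depth - mean_depth ** 2, min_variance)
    if receiver_depth <= mean_depth:
        return 1.0                              # receiver in front of the mean occluder
    d = receiver_depth - mean_depth
    return variance / (variance + d * d)        # p_max from Chebyshev's inequality

# Example: filter region with mean depth 0.40 and mean squared depth 0.17,
# shading a receiver at depth 0.55.
print(vsm_visibility(0.40, 0.17, 0.55))
```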

Proceedings ArticleDOI
22 Jan 2006
TL;DR: This paper provides an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems, and gives a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation in time O(k^2).
Abstract: Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^(1/k)) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.

Journal ArticleDOI
TL;DR: In this article, the authors determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero; the lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear.
Abstract: We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future, and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. markup shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not imply positive average inflation rates in equilibrium. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) markup shocks.

Journal ArticleDOI
TL;DR: This paper quantifies the 'sufficient sparsity' condition by defining an equivalence breakdown point (EBP), and describes a semi-empirical heuristic for predicting the local EBP for an ensemble of 'typical' matrices with unit-norm columns.

Journal ArticleDOI
TL;DR: For subsets A of the finite field Fp, p prime, this article provides a lower bound on max(|A + A|, |A · A|) and new bounds on the exponential sums ∑_{x1,...,xk ∈ A} exp(2πi x1⋯xk ξ/p).
Abstract: Our first result is a ‘sum-product’ theorem for subsets A of the finite field Fp, p prime, providing a lower bound on max(|A + A|, |A · A|). The second and main result provides new bounds on exponential sums ∑_{x1,...,xk ∈ A} exp(2πi x1⋯xk ξ/p), where A ⊂ Fp.

Journal ArticleDOI
TL;DR: In this article, a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM) under margin-type conditions is proposed, and new risk bounds are derived for the case where the classification rules belong to some VC-class.
Abstract: We propose a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM). We essentially focus on the binary classification framework. We extend Tsybakov’s analysis of the risk of an ERM under margin type conditions by using concentration inequalities for conveniently weighted empirical processes. This allows us to deal with ways of measuring the “size” of a class of classifiers other than entropy with bracketing as in Tsybakov’s work. In particular, we derive new risk bounds for the ERM when the classification rules belong to some VC-class under margin conditions and discuss the optimality of these bounds in a minimax sense.

Journal ArticleDOI
TL;DR: In this paper, the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma was determined.
Abstract: We determine the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma. Our starting point is the field-theoretic formula for the sterile neutrino production rate, derived in our previous work [JHEP 06(2006)053], which allows us to systematically incorporate all relevant effects, and also to analyse various hadronic uncertainties. Our numerical results differ moderately from previous computations in the literature, and lead to an absolute upper bound on the mixing angles of the dark matter sterile neutrino. Comparing this bound with existing astrophysical X-ray constraints, we find that the Dodelson-Widrow scenario, which proposes sterile neutrinos generated by active-sterile neutrino transitions to be the sole source of dark matter, is only possible for sterile neutrino masses lighter than 3.5 keV (6 keV if all hadronic uncertainties are pushed in one direction and the most stringent X-ray bounds are relaxed by a factor of two). This upper bound may conflict with a lower bound from structure formation, but a definitive conclusion necessitates numerical simulations with the non-equilibrium momentum distribution function that we derive. If other production mechanisms are also operative, no upper bound on the sterile neutrino mass can be established.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a simple proof of the Lieb-Robinson bound and use it to prove the existence of the dynamics for interactions with polynomial decay, and then use their results to demonstrate that there is an upper bound on the rate at which correlations between observables with separated support can accumulate.
Abstract: We provide a simple proof of the Lieb-Robinson bound and use it to prove the existence of the dynamics for interactions with polynomial decay. We then use our results to demonstrate that there is an upper bound on the rate at which correlations between observables with separated support can accumulate as a consequence of the dynamics.
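Up to the precise prefactor, which varies between formulations, the bound in question states that for observables A, B supported on regions X, Y and Heisenberg dynamics τ_t there are constants C, μ and a velocity v such that

```latex
\bigl\|\,[\tau_{t}(A),\,B]\,\bigr\|
\;\le\;
C\,\|A\|\,\|B\|\,\min\bigl(|X|,|Y|\bigr)\,
e^{-\mu\,\bigl(d(X,Y)-v\,|t|\bigr)} ,
```

so correlations effectively propagate no faster than the Lieb-Robinson velocity v.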

Proceedings ArticleDOI
23 Jul 2006
TL;DR: In this article, the authors studied the problem of computing functions of values at the nodes in a network in a totally distributed manner, and proposed a distributed randomized algorithm for computing separable functions based on properties of exponential random variables.
Abstract: Motivated by applications to sensor, peer-to-peer, and ad-hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].
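The probabilistic identity underlying the algorithm is easy to check outside any network: if node i draws exponential random variables with rate equal to its value y_i, the coordinate-wise minimum across nodes is exponential with rate Σ_i y_i, so agreeing on W such minima (which a min-based information-spreading primitive can do) yields an estimator of the sum. A standalone sketch of the estimator; the node values and W are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

values = rng.uniform(0.5, 2.0, size=200)       # y_i held at the 200 nodes
true_sum = values.sum()

W = 400                                         # number of repetitions
# Each node draws W exponentials with rate y_i (numpy uses scale = 1/rate);
# the network only needs to agree on the coordinate-wise minima.
samples = rng.exponential(scale=1.0 / values[:, None], size=(len(values), W))
minima = samples.min(axis=0)                    # each minimum ~ Exp(rate = sum_i y_i)

estimate = (W - 1) / minima.sum()               # unbiased estimator of the rate
print(f"true sum {true_sum:.1f}, estimate {estimate:.1f}")
```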

Journal ArticleDOI
TL;DR: A new proof technique is developed that bounds the runtime of the (μ+1) EA and investigates the stochastic process for creating family trees of individuals; the depth of these trees is bounded and the progress of the population towards the optimum is captured.
Abstract: Although Evolutionary Algorithms (EAs) have been successfully applied to optimization in discrete search spaces, theoretical developments remain weak, in particular for population-based EAs. This paper presents a first rigorous analysis of the (μ+1) EA on pseudo-Boolean functions. Using three well-known example functions from the analysis of the (1+1) EA, we derive bounds on the expected runtime and success probability. For two of these functions, upper and lower bounds on the expected runtime are tight, and on all three functions, the (μ+1) EA is never more efficient than the (1+1) EA. Moreover, all lower bounds grow with μ. On a more complicated function, however, a small increase of μ provably decreases the expected runtime drastically. This paper develops a new proof technique that bounds the runtime of the (μ+1) EA. It investigates the stochastic process for creating family trees of individuals; the depth of these trees is bounded. Thereby, the progress of the population towards the optimum is captured. This new technique is general enough to be applied to other population-based EAs.
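A minimal (μ+1) EA of the kind analysed (uniform parent selection, standard bit mutation with rate 1/n, replacement of a worst individual), run on the classic OneMax function; the parameters are illustrative only.

```python
import numpy as np

def mu_plus_one_ea(n=50, mu=5, max_gens=100_000, seed=7):
    """(mu+1) EA maximizing OneMax(x) = number of ones in x."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(mu, n))
    fitness = pop.sum(axis=1)
    for gen in range(1, max_gens + 1):
        parent = pop[rng.integers(mu)]                  # uniform parent selection
        flips = rng.random(n) < 1.0 / n                 # standard bit mutation, rate 1/n
        child = np.where(flips, 1 - parent, parent)
        child_fit = child.sum()
        worst = fitness.argmin()
        if child_fit >= fitness[worst]:                 # replace a worst individual
            pop[worst], fitness[worst] = child, child_fit
        if fitness.max() == n:
            return gen
    return max_gens

print("generations to reach the optimum:", mu_plus_one_ea())
```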

Posted Content
TL;DR: In this article, the concept of graph energy was extended to matrices, and upper and lower bounds on matrix energy were given, extending previous results for graphs and showing that the energy of almost all graphs can be estimated.
Abstract: We extend the concept of graph energy, introduced by Gutman, to matrices. We give upper and lower bounds on matrix energy extending previous results for graphs. In particular, we estimate the energy of almost all graphs.
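A two-line numerical check of the definition under discussion: the energy of a matrix is the sum of its singular values, which for the symmetric adjacency matrix of a graph coincides with Gutman's sum of absolute eigenvalues. The 4-cycle is used as a small example.

```python
import numpy as np

def matrix_energy(M):
    """Energy of a matrix = sum of its singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

# Adjacency matrix of the 4-cycle C4: eigenvalues 2, 0, 0, -2, so the energy is 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(matrix_energy(A))                         # ≈ 4.0
print(np.abs(np.linalg.eigvalsh(A)).sum())      # Gutman's graph energy, same value
```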

Journal ArticleDOI
Tong Zhang1
TL;DR: This paper establishes upper and lower bounds for some statistical estimation problems through concise information-theoretic arguments based on a simple yet general inequality, which naturally leads to a general randomized estimation method, for which performance upper bounds can be obtained.
Abstract: In this paper, we establish upper and lower bounds for some statistical estimation problems through concise information-theoretic arguments. Our upper bound analysis is based on a simple yet general inequality which we call the information exponential inequality. We show that this inequality naturally leads to a general randomized estimation method, for which performance upper bounds can be obtained. The lower bounds, applicable for all statistical estimators, are obtained by original applications of some well known information-theoretic inequalities, and approximately match the obtained upper bounds for various important problems. Moreover, our framework can be regarded as a natural generalization of the standard minimax framework, in that we allow the performance of the estimator to vary for different possible underlying distributions according to a predefined prior.

Journal ArticleDOI
TL;DR: In this paper, the cell-probe lower bound for dynamic data structures has been shown to be amortized in the external-memory model without assumptions on the data structure (such as the comparison model).
Abstract: We develop a new technique for proving cell-probe lower bounds on dynamic data structures. This technique enables us to prove an amortized randomized $\Omega(\lg n)$ lower bound per operation for several data structural problems on $n$ elements, including partial sums, dynamic connectivity among disjoint paths (or a forest or a graph), and several other dynamic graph problems (by simple reductions). Such a lower bound breaks a long-standing barrier of $\Omega(\lg n\,/\lg\lg n)$ for any dynamic language membership problem. It also establishes the optimality of several existing data structures, such as Sleator and Tarjan's dynamic trees. We also prove the first $\Omega(\log_B n)$ lower bound in the external-memory model without assumptions on the data structure (such as the comparison model). Our lower bounds also give a query-update trade-off curve matched, e.g., by several data structures for dynamic connectivity in graphs. We also prove matching upper and lower bounds for partial sums when parameterized by the word size and the maximum additive change in an update.
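The Ω(lg n) bound for partial sums is matched, up to constants, by textbook data structures; a Fenwick (binary indexed) tree performs both an update and a prefix-sum query in O(log n) time. A minimal sketch:

```python
class FenwickTree:
    """Partial sums with O(log n) updates and O(log n) prefix-sum queries."""

    def __init__(self, n):
        self.tree = [0] * (n + 1)

    def update(self, i, delta):            # add delta to element i (1-indexed)
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)                  # move to the next covering node

    def prefix_sum(self, i):               # sum of elements 1..i
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)                  # strip the lowest set bit
        return total

ft = FenwickTree(8)
for idx, val in enumerate([5, 3, 7, 6, 2, 1, 4, 8], start=1):
    ft.update(idx, val)
print(ft.prefix_sum(4))                    # 5 + 3 + 7 + 6 = 21
```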