
Showing papers on "Upper and lower bounds published in 1995"


Journal ArticleDOI
TL;DR: The authors derive an upper bound on the carried traffic of connections for any routing and wavelength assignment (RWA) algorithm in a reconfigurable optical network and quantify the amount of wavelength reuse achievable in large networks as a function of the number of wavelengths, number of edges, and number of nodes for randomly constructed networks as well as de Bruijn networks.
Abstract: Considers routing connections in a reconfigurable optical network using WDM. Each connection between a pair of nodes in the network is assigned a path through the network and a wavelength on that path, such that connections whose paths share a common link in the network are assigned different wavelengths. The authors derive an upper bound on the carried traffic of connections (or equivalently, a lower bound on the blocking probability) for any routing and wavelength assignment (RWA) algorithm in such a network. The bound scales with the number of wavelengths and is achieved asymptotically (when a large number of wavelengths is available) by a fixed RWA algorithm. The bound can be used as a metric against which the performance of different RWA algorithms can be compared for networks of moderate size. The authors illustrate this by comparing the performance of a simple shortest-path RWA (SP-RWA) algorithm via simulation relative to the bound. They also derive a similar bound for optical networks using dynamic wavelength converters, which are equivalent to circuit-switched telephone networks, and compare the two cases. Finally, they quantify the amount of wavelength reuse achievable in large networks using the SP-RWA via simulation as a function of the number of wavelengths, number of edges, and number of nodes for randomly constructed networks as well as de Bruijn networks. They also quantify the difference in wavelength reuse between two different optical node architectures.
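
The bound serves as a yardstick for heuristics of the SP-RWA flavor. Below is a minimal sketch of that kind of algorithm, shortest-path routing with first-fit wavelength assignment; the BFS routing, the first-fit rule, and all names are illustrative assumptions rather than the paper's exact procedure:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path over an adjacency dict {node: iterable of neighbors}."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [u]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # dst unreachable

def sp_rwa(adj, demands, num_wavelengths):
    """Route each demand on a shortest path; assign the first wavelength
    free on every link of that path (first fit). Demands that find no
    free wavelength are blocked, which is what the paper's bound limits."""
    used = set()  # (link, wavelength) pairs already occupied
    accepted = []
    for s, t in demands:
        path = shortest_path(adj, s, t)
        if path is None:
            continue
        links = [frozenset(e) for e in zip(path, path[1:])]
        for w in range(num_wavelengths):
            if all((l, w) not in used for l in links):
                used.update((l, w) for l in links)
                accepted.append(((s, t), path, w))
                break
    return accepted
```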

1,046 citations


Journal ArticleDOI
TL;DR: The authors derive a natural upper bound on the cumulative redundancy of the method for individual sequences, which shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound.
Abstract: Describes a sequential universal data compression procedure for binary tree sources that performs the "double mixture." Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. The authors derive a natural upper bound on the cumulative redundancy of the method for individual sequences. The three terms in this bound can be identified as coding, parameter, and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. The upper bound on the redundancy shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound.
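
The recursive "double mixture" is compact enough to sketch: each context node mixes the Krichevsky-Trofimov (KT) estimate of its own 0/1 counts with the product of its children's weighted probabilities. A batch version for binary sequences with a fixed context depth; the names and the batch formulation are illustrative (the paper's sequential procedure computes the same probabilities with per-symbol updates in linear time and storage):

```python
from math import lgamma, exp

def kt_prob(a, b):
    """KT estimator: probability assigned to a sequence with a zeros, b ones.
    Closed form: Gamma(a+1/2) * Gamma(b+1/2) / (pi * Gamma(a+b+1))."""
    return exp(lgamma(a + 0.5) + lgamma(b + 0.5)
               - 2 * lgamma(0.5) - lgamma(a + b + 1))

def ctw_prob(pairs, depth):
    """Weighted probability of (symbol, context) pairs; each context is a
    tuple of past bits, most recent last. At an internal node:
        P_w = 1/2 * P_e + 1/2 * P_w(child 0) * P_w(child 1)."""
    a = sum(1 for s, _ in pairs if s == 0)
    pe = kt_prob(a, len(pairs) - a)
    if depth == 0:
        return pe
    child0 = [(s, c[:-1]) for s, c in pairs if c[-1] == 0]
    child1 = [(s, c[:-1]) for s, c in pairs if c[-1] == 1]
    return 0.5 * pe + 0.5 * ctw_prob(child0, depth - 1) * ctw_prob(child1, depth - 1)

bits = [0, 1, 0, 0, 1, 0, 0, 1]
D = 2  # context depth
pairs = [(bits[i], tuple(bits[i - D:i])) for i in range(D, len(bits))]
print(ctw_prob(pairs, D))  # coding probability of bits[D:] given the first D bits
```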

999 citations


Journal ArticleDOI
TL;DR: This paper constructs index policies that depend on the rewards from each arm only through their sample mean, and achieve O(log n) regret with a constant based on the Kullback-Leibler number.
Abstract: We consider a non-Bayesian infinite horizon version of the multi-armed bandit problem with the objective of designing simple policies whose regret increases slowly with time. In their seminal work on this problem, Lai and Robbins had obtained an O(log n) lower bound on the regret with a constant that depends on the Kullback-Leibler number. They also constructed policies for some specific families of probability distributions (including exponential families) that achieved the lower bound. In this paper we construct index policies that depend on the rewards from each arm only through their sample mean. These policies are computationally much simpler and are also applicable much more generally. They achieve O(log n) regret with a constant that is also based on the Kullback-Leibler number. This constant turns out to be optimal for one-parameter exponential families; however, in general it is derived from the optimal one via a 'contraction' principle. Our results rely entirely on a few key lemmas from the theory of large deviations.
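
A minimal sketch of a sample-mean-based index policy in this spirit; the specific index below (sample mean plus a log-t confidence radius) follows the now-standard UCB form with illustrative constants, not necessarily the paper's own indices:

```python
import math
import random

def index_policy(arms, horizon):
    """Pull each arm once, then always pull the arm maximizing
    sample_mean + sqrt(2 ln t / n_j): an index that depends on the
    observed rewards only through the sample mean and pull count."""
    k = len(arms)
    n = [0] * k          # pulls per arm
    mean = [0.0] * k     # sample means
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            j = t - 1    # initialization: play each arm once
        else:
            j = max(range(k),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = arms[j]()    # draw a reward from arm j
        n[j] += 1
        mean[j] += (r - mean[j]) / n[j]
        total += r
    return total

# two Bernoulli arms; regret against always playing the 0.6 arm grows as O(log n)
arms = [lambda: float(random.random() < 0.4),
        lambda: float(random.random() < 0.6)]
print(index_policy(arms, horizon=10_000))
```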

660 citations


Journal ArticleDOI
TL;DR: This lemma is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable and is illustrated by showing how a number of well-known results can be proved using it.
Abstract: We study the smallest number $\psi(K)$ such that a given convex body K in $\mathbb{R}^n$ can be cut into two parts $K_1$ and $K_2$ by a surface with an (n-1)-dimensional measure $\psi(K)\,\mathrm{vol}(K_1)\cdot\mathrm{vol}(K_2)/\mathrm{vol}(K)$. Let $M_1(K)$ be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that $$\psi(K) \geqslant \frac{\ln 2}{M_1(K)}$$ , and give other upper and lower bounds. We conjecture that our upper bound is the exact value up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.

489 citations


Journal ArticleDOI
TL;DR: In this paper, the Dirichlet Laplacian on curved tubes of a constant cross section in two and three dimensions is investigated, and it is shown that if the tube is non-straight and its curvature vanishes asymptotically, there is always a bound state below the bottom of the essential spectrum.
Abstract: The Dirichlet Laplacian on curved tubes of constant cross section in two and three dimensions is investigated. It is shown that if the tube is non-straight and its curvature vanishes asymptotically, there is always a bound state below the bottom of the essential spectrum. An upper bound on the number of these bound states in thin tubes is derived. Furthermore, if the tube is only slightly bent, there is just one bound state; we derive its behaviour with respect to the bending angle. Finally, perturbation theory for these eigenvalues in any thin tube with respect to the tube radius is constructed and some open questions are formulated.

469 citations


Journal ArticleDOI
TL;DR: An upper bound is obtained, which is independent of d, for the number n(ε, d) of points for which the discrepancy is at most ε: n(ε, d) ≤ 7.26 ε^(−2.454) for all d and all ε ≤ 1.

382 citations


Journal ArticleDOI
TL;DR: An upper bound for the entropy is established, based on the eigenvalue interlacing property, and incorporated in a branch-and-bound algorithm for the exact solution of the experimental design problem of selecting a most informative subset, having prespecified size, from a set of correlated random variables.
Abstract: We study the experimental design problem of selecting a most informative subset, having prespecified size, from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology. In these applications, observations can be collected at different locations, and possibly, at different times. Information is measured by “entropy.” In the Gaussian case, the problem is recast as that of maximizing the determinant of the covariance matrix of the chosen subset. We demonstrate that this problem is NP-hard. We establish an upper bound for the entropy, based on the eigenvalue interlacing property, and we incorporate this bound in a branch-and-bound algorithm for the exact solution of the problem. We present computational results for estimated covariance matrices that correspond to sets of environmental monitoring stations in the United States.
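
The pruning bound can be made concrete: in the Gaussian case the entropy of a subset is, up to an additive constant, the log-determinant of its covariance submatrix, and by Cauchy's interlacing theorem the determinant of any k-by-k principal submatrix is at most the product of the k largest eigenvalues of the matrix it sits in. A small branch-and-bound sketch built on those two facts; the include/exclude branching scheme and all names are illustrative, not the paper's implementation:

```python
import numpy as np

def interlacing_bound(cov, candidates, k):
    """Upper bound on log det of ANY k x k principal submatrix drawn from
    cov[candidates, candidates]: sum of logs of its k largest eigenvalues."""
    sub = cov[np.ix_(candidates, candidates)]
    lam = np.sort(np.linalg.eigvalsh(sub))[::-1]
    return float(np.sum(np.log(lam[:k])))

def best_subset(cov, k):
    """Exact maximization of log det over k-subsets, pruning with the bound."""
    n = cov.shape[0]
    best = {"val": -np.inf, "set": None}

    def branch(chosen, nxt):
        free = list(range(nxt, n))
        if len(chosen) + len(free) < k:
            return
        if interlacing_bound(cov, chosen + free, k) <= best["val"]:
            return  # no completion of this node can beat the incumbent
        if len(chosen) == k:
            val = float(np.linalg.slogdet(cov[np.ix_(chosen, chosen)])[1])
            if val > best["val"]:
                best["val"], best["set"] = val, sorted(chosen)
            return
        branch(chosen + [free[0]], nxt + 1)  # include index nxt
        branch(chosen, nxt + 1)              # exclude index nxt

    branch([], 0)
    return best["set"], best["val"]

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
cov = A @ A.T + 0.1 * np.eye(8)   # a synthetic positive definite covariance
print(best_subset(cov, k=3))
```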

360 citations


Journal ArticleDOI
TL;DR: In this article, a method for computing rigorous upper bounds under plane strain conditions is described, based on a linear three-noded triangular element, which has six unknown nodal velocities and a fixed number of unknown multiplier rates, and uses the kinematic theorem to define a kinematically admissible velocity field as the solution of a linear programming problem.

357 citations


Journal ArticleDOI
TL;DR: It is proved that the work function algorithm for the k-server problem has a competitive ratio of at most 2k − 1, using a duality lemma that exploits quasiconvexity to characterize the configurations that achieve the maximum increase of the work function.
Abstract: We prove that the work function algorithm for the k-server problem has a competitive ratio at most 2k−1. Manasse et al. [1988] conjectured that the competitive ratio for the k-server problem is exactly k (it is trivially at least k); previously the best-known upper bound was exponential in k. Our proof involves three crucial ingredients: a quasiconvexity property of work functions, a duality lemma that uses quasiconvexity to characterize the configurations that achieve the maximum increase of the work function, and a potential function that exploits the duality lemma.
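
For small metrics the algorithm can be simulated directly. The update below is the standard dynamic program for work functions (w_t(X) = w_{t-1}(X) if the request r is already in X, else the minimum over x in X of w_{t-1}(X − x + r) + d(x, r)); the brute-force configuration space and all names are an illustrative assumption, usable only on toy instances:

```python
from itertools import combinations, permutations

def wfa(points, dist, start, requests):
    """Work Function Algorithm on a small finite metric. dist[a][b] is the
    distance between points; configurations are frozensets of k distinct
    points, so the state space is exponential: illustration only."""
    k = len(start)
    configs = [frozenset(c) for c in combinations(points, k)]

    def match_cost(A, B):
        # min-cost matching between two k-point configurations (brute force)
        A = sorted(A)
        return min(sum(dist[a][b] for a, b in zip(A, perm))
                   for perm in permutations(sorted(B)))

    w = {X: match_cost(frozenset(start), X) for X in configs}  # w_0
    cur, cost = frozenset(start), 0.0
    for r in requests:
        w = {X: (w[X] if r in X else
                 min(w[(X - {x}) | {r}] + dist[x][r] for x in X))
             for X in configs}
        if r not in cur:
            # WFA move: the server x minimizing w_t(cur - x + r) + d(x, r)
            x = min(cur, key=lambda y: w[(cur - {y}) | {r}] + dist[y][r])
            cost += dist[x][r]
            cur = (cur - {x}) | {r}
    return cost

# 4-point uniform metric, k = 2 servers
pts = "abcd"
d = {p: {q: (0.0 if p == q else 1.0) for q in pts} for p in pts}
print(wfa(pts, d, start=("a", "b"), requests=list("cdcdab")))
```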

325 citations


Journal ArticleDOI
TL;DR: In this paper, the authors re-examine the lower bound on the mass of the Higgs boson, M_h, from standard model stability including next-to-leading-log radiative corrections, and show that the bound is O(10 GeV) less stringent than in previous estimates.

318 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian version of the Cramér-Rao lower bound due to van Trees is used to give an elementary proof that the limiting distribution of any regular estimator cannot have a variance less than the classical information bound, under minimal regularity conditions.
Abstract: We use a Bayesian version of the Cramér-Rao lower bound due to van Trees to give an elementary proof that the limiting distribution of any regular estimator cannot have a variance less than the classical information bound, under minimal regularity conditions. We also show how minimax convergence rates can be derived in various non- and semi-parametric problems from the van Trees inequality. Finally we develop multivariate versions of the inequality and give applications.
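
For reference, a commonly quoted univariate statement of the van Trees inequality, under standard regularity conditions and a prior density that vanishes at the endpoints of its support (stated here as generally cited, not verbatim from the paper):

```latex
% Van Trees inequality, univariate form: for any estimator \hat\theta(X),
% with the expectation taken over both \theta \sim \pi and X \sim P_\theta,
\[
  \mathbb{E}\!\left[ \bigl( \hat{\theta}(X) - \theta \bigr)^{2} \right]
  \;\geq\;
  \frac{1}{\mathbb{E}_{\pi}\!\left[ I(\theta) \right] + I(\pi)},
  \qquad
  I(\pi) = \int \frac{\bigl( \pi'(\theta) \bigr)^{2}}{\pi(\theta)}\, d\theta ,
\]
% where I(\theta) is the Fisher information of the model at \theta and
% I(\pi) is the Fisher information of the prior.
```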

Journal ArticleDOI
TL;DR: This paper deals with the global optimization of networks that consist of splitters, mixers, and linear process units and involve multicomponent streams, and shows that only a few nodes are commonly required in the branch and bound search.

Journal ArticleDOI
17 Sep 1995
TL;DR: A new family of maximum distance separable (MDS) array codes is presented, and it is shown that the upper bound obtained from these codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.
Abstract: A new family of maximum distance separable (MDS) array codes is presented. The code arrays contain p information columns and r independent parity columns, each column consisting of p-1 bits, where p is a prime. We extend a previously known construction for the case r=2 to three and more parity columns. It is shown that when r=3 such extension is possible for any prime p. For larger values of r, we give necessary and sufficient conditions for our codes to be MDS, and then prove that if p belongs to a certain class of primes these conditions are satisfied up to r ≤ 8. One of the advantages of the new codes is that encoding and decoding may be accomplished using simple cyclic shifts and XOR operations on the columns of the code array. We develop efficient decoding procedures for the case of two- and three-column errors. This again extends the previously known results for the case of a single-column error. Another primary advantage of our codes is related to the problem of efficient information updates. We present upper and lower bounds on the average number of parity bits which have to be updated in an MDS code over GF(2^m), following an update in a single information bit. This average number is of importance in many storage applications which require frequent updates of information. We show that the upper bound obtained from our codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.
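
To make "cyclic shifts and XOR" concrete: in this family a parity column is an XOR of the information columns, each cyclically shifted by an amount growing linearly in the column index (a Vandermonde-like pattern in the shift operator). The toy encoder below mirrors only that structure; it is not the paper's exact MDS construction, and the names and shift schedule are illustrative:

```python
def cyclic_shift(col, s, p):
    """Cyclically shift a length-p bit column down by s positions."""
    return [col[(i - s) % p] for i in range(p)]

def encode(info_cols, r, p):
    """Parity column j = XOR over information columns k of
    column k cyclically shifted by (j * k) mod p."""
    parities = []
    for j in range(r):
        acc = [0] * p
        for k, col in enumerate(info_cols):
            shifted = cyclic_shift(col, (j * k) % p, p)
            acc = [a ^ b for a, b in zip(acc, shifted)]
        parities.append(acc)
    return parities

# columns carry p-1 information bits padded with a 0 to length p,
# a convention commonly used in shift-and-XOR array codes
p = 5
info = [[1, 0, 1, 1, 0],
        [0, 1, 1, 0, 0]]
print(encode(info, r=3, p=p))
```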

Proceedings ArticleDOI
29 May 1995
TL;DR: This paper considers two-party communication complexity, the ``asymmetric case'', when the input sizes of the two players differ significantly, and derives two generally applicable methods of proving lower bounds, obtaining several applications.
Abstract: In this paper we consider two-party communication complexity, the ``asymmetric case'', when the input sizes of the two players differ significantly. Most previous work on communication complexity only considers the total number of bits sent, but we study trade-offs between the number of bits the first player sends and the number of bits the second sends. These types of questions are closely related to the complexity of static data structure problems in the cell probe model. We derive two generally applicable methods of proving lower bounds and obtain several applications. These applications include new lower bounds for data structures in the cell probe model. Of particular interest is our ``round elimination'' lemma, which is interesting also for the usual symmetric communication case. This lemma generalizes and abstracts in a very clean form the ``round reduction'' techniques used in many previous lower bound proofs.

Journal ArticleDOI
19 Apr 1995
TL;DR: A method is presented for determining an upper bound on end-to-end delay for sources conforming to Leaky Bucket and exponentially bounded burstiness models.
Abstract: We define a class of Guaranteed Rate (GR) scheduling algorithms. The GR class includes Virtual Clock, Packet-by-Packet Generalized Processor Sharing and Self-Clocked Fair Queuing. For networks that employ scheduling algorithms belonging to GR, we present a method for determining an upper bound on end-to-end delay. The method facilitates determination of end-to-end delay bounds for a variety of sources. We illustrate the method by determining end-to-end delay bounds for sources conforming to Leaky Bucket and exponentially bounded burstiness.
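
As a rough illustration of the shape such bounds take for a Leaky Bucket source: with a rate r reserved at K GR servers, end-to-end delay bounds of the familiar Parekh-Gallager form combine a burst term, per-hop packet terms, and per-server scheduling constants. The sketch below assumes that form with illustrative constants; the paper's exact expression may differ:

```python
def gr_delay_bound(sigma, rho, r, l_max, betas):
    """Hedged end-to-end delay bound for a (sigma, rho) leaky-bucket source
    with reserved rate r >= rho across K Guaranteed-Rate servers, each with
    scheduling constant beta_i (e.g. l_max / link_rate for Virtual Clock or
    PGPS). Units: bits and bits/second, result in seconds. This is the
    Parekh-Gallager-style form, assumed here, not quoted from the paper."""
    assert r >= rho, "bound requires the reserved rate to cover the token rate"
    K = len(betas)
    return sigma / r + (K - 1) * l_max / r + sum(betas)

# 3-hop path: 10 kbit burst, 1 Mb/s reservation, 12 kbit max packets
print(gr_delay_bound(sigma=10_000, rho=500_000, r=1_000_000,
                     l_max=12_000, betas=[0.001, 0.001, 0.002]))
```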

Journal ArticleDOI
TL;DR: The bounds follow from Theorem 6.2 of Montgomery [6] and from the large sieve in the form due to Gallagher [2]; in each case the upper bound is composed of two terms, the first reflecting the long-term average and the second the contribution of a single point where the Dirichlet polynomial is large.
Abstract: where Σ∗ indicates summation over primitive characters only. These last two bounds follow respectively from Theorem 6.2 of Montgomery [6] and from the large sieve in the form due to Gallagher [2], for example. In each case one may interpret the upper bound as being composed of two terms, the first of which reflects the long term average, and the second of which reflects the contribution of a single point where the Dirichlet polynomial is large.

Journal ArticleDOI
TL;DR: In this article, the problem of estimating the set G from a sample of i.i.d. observations uniformly distributed in G is considered and an estimator which is asymptotically efficient in the minimax sense is proposed.
Abstract: Let g: [0,1] → [0,1] be a monotone nondecreasing function and let G be the closure of the set {(x, y) ∈ [0,1] × [0,1] : 0 ≤ y ≤ g(x)}. We consider the problem of estimating the set G from a sample of i.i.d. observations uniformly distributed in G. The estimation error is measured in the Hausdorff metric. We propose an estimator which is asymptotically efficient in the minimax sense.
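
A naive baseline makes the model concrete: since g is nondecreasing, the running maximum of the observed points is itself a monotone boundary estimate. This simple envelope is illustrative only; the paper's minimax-efficient estimator is a more refined construction:

```python
import bisect
import random

def boundary_estimate(sample):
    """g_hat(x) = max{y_i : x_i <= x}: a monotone step-function envelope
    of points observed uniformly under the graph of g."""
    xs, ys, running = [], [], 0.0
    for x, y in sorted(sample):          # sweep left to right
        running = max(running, y)
        xs.append(x)
        ys.append(running)
    def g_hat(x):
        i = bisect.bisect_right(xs, x) - 1
        return ys[i] if i >= 0 else 0.0
    return g_hat

# rejection-sample 500 uniform points under g(x) = x on [0, 1]
sample = []
while len(sample) < 500:
    x, y = random.random(), random.random()
    if y <= x:
        sample.append((x, y))
print(boundary_estimate(sample)(0.5))    # should be close to 0.5
```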

01 Jan 1995
TL;DR: In this article, the capacity of the discrete-time quadrature additive Gaussian channel (QAGC) with inputs subjected to (normalized) average and peak power constraints, ρ_a and ρ_p respectively, is considered.
Abstract: The capacity C(ρ_a, ρ_p) of the discrete-time quadrature additive Gaussian channel (QAGC) with inputs subjected to (normalized) average and peak power constraints, ρ_a and ρ_p respectively, is considered. By generalizing Smith's results for the scalar average- and peak-power-constrained Gaussian channel, it is shown that the capacity-achieving distribution is discrete in amplitude (envelope), having a finite number of mass points, with a uniformly distributed independent phase, and it is geometrically described by concentric circles. It is shown that, with peak power being the only effective constraint, a constant-envelope input with uniformly distributed phase is capacity achieving for ρ_p ≤ 7.8 dB (4.8 dB per dimension). The capacity under a peak-power constraint is evaluated for a wide range of ρ_p by incorporating the theoretical observations into a nonlinear dynamic programming procedure. Closed-form expressions for the asymptotic (small and large ρ_a and ρ_p) capacity, for the corresponding capacity-achieving distribution, and for lower and upper bounds on the capacity C(ρ_a, ρ_p) are developed. The capacity C(ρ_a, ρ_p) provides an improved ultimate upper bound on the reliable information rates transmitted over the QAGC by any communication system subjected to both average and peak power limitations, as compared to the classical Shannon formula for the capacity of the QAGC, which does not account for the peak-power constraint. This is particularly important for systems that operate with a restrictive (close to 1) average-to-peak power ratio ρ_a/ρ_p and at moderate power values.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: It is proved that the Elmore delay is an absolute upper bound on the 50% delay of an RC tree response, that this bound holds for input signals other than steps, and that the actual delay asymptotically approaches the Elmore delay as the input signal rise time increases.
Abstract: The Elmore delay is an extremely popular delay metric, particularly for RC tree analysis. The widespread usage of this metric is mainly attributable to it being the most accurate delay measure that is a simple analytical function of the circuit parameters. The only drawbacks to this delay metric are the uncertainty as to whether it is an optimistic or a pessimistic estimate, and the restriction to step response delay estimation. In this paper, we prove that the Elmore delay is an absolute upper bound on the 50% delay of an RC tree response. Moreover, we prove that this bound holds for input signals other than steps, and that the actual delay asymptotically approaches the Elmore delay as the input signal rise time increases. A lower bound on the delay is also developed using the Elmore delay and the second moment of the impulse response. The utility of this bound lies in understanding the accuracy and the limitations of the Elmore delay metric as it is used for design automation.
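
The metric itself is a two-pass tree traversal: the Elmore delay at node n is the sum, over the edges e on the root-to-n path, of the edge resistance times the total capacitance downstream of e. A minimal sketch (data layout and names are illustrative):

```python
def elmore_delays(children, R, C, root=0):
    """Elmore delay at every node of an RC tree.
    children: dict node -> list of child nodes; R[v]: resistance of the
    edge into v; C[v]: capacitance at node v."""
    # pre-order so parents always precede their children
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    # pass 1: total capacitance of each subtree (children before parents)
    cdown = dict(C)
    for v in reversed(order):
        for u in children.get(v, []):
            cdown[v] += cdown[u]
    # pass 2: accumulate R * downstream-C along root-to-node paths
    delay = {root: 0.0}
    for v in order:
        for u in children.get(v, []):
            delay[u] = delay[v] + R[u] * cdown[u]
    return delay

# chain: driver(0) -100ohm- node1 -200ohm- node2
children = {0: [1], 1: [2]}
R = {1: 100.0, 2: 200.0}              # ohms
C = {0: 0.0, 1: 1e-12, 2: 2e-12}      # farads
print(elmore_delays(children, R, C))  # node2: 100*3e-12 + 200*2e-12 = 0.7 ns
```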

Journal ArticleDOI
TL;DR: A new and efficient algorithm is presented for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns; it produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches.

Journal ArticleDOI
TL;DR: This paper develops filters with an optimized upper bound for the error variance for both finite and infinite horizon filtering problems.
Abstract: This paper deals with the robust minimum variance filtering problem for linear systems subject to norm-bounded parameter uncertainty in both the state and the output matrices of the state-space model. The problem addressed is the design of linear filters having an error variance with a guaranteed upper bound for any allowed uncertainty. Two methods for designing robust filters are investigated. The first one deals with constant parameter uncertainty and focuses on the design of steady-state filters that yield an upper bound to the worst-case asymptotic error variance. This bound depends on an upper bound for the power spectrum density of a signal at a specific point in the system, and it can be made tighter if a tight bound on the latter power spectrum can be obtained. The second method allows for time-varying parameter uncertainty and for general time-varying systems and is more systematic. We develop filters with an optimized upper bound for the error variance for both finite and infinite horizon filtering problems.

Journal ArticleDOI
TL;DR: This paper studies the information rate of secret sharing schemes for access structures based on graphs, which measures how much information is being distributed as shares compared with the size of the secret key, and the average information rate, which is the ratio between the secret size and the arithmetic mean of the size of the shares.
Abstract: In this paper we continue a study of secret sharing schemes for access structures based on graphs. Given a graph G, we require that a subset of participants can compute a secret key if they contain an edge of G; otherwise, they can obtain no information regarding the key. We study the information rate of such schemes, which measures how much information is being distributed as shares compared with the size of the secret key, and the average information rate, which is the ratio between the secret size and the arithmetic mean of the size of the shares. We give both upper and lower bounds on the optimal information rate and average information rate that can be obtained. Upper bounds arise by applying entropy arguments due to Capocelli et al. [15]. Lower bounds come from constructions that are based on graph decompositions. Application of these constructions requires solving a particular linear programming problem. We prove some general results concerning the information rate and average information rate for paths, cycles, and trees. Also, we study the 30 (connected) graphs on at most five vertices, obtaining exact values for the optimal information rate in 26 of the 30 cases, and for the optimal average information rate in 28 of the 30 cases.

Journal ArticleDOI
TL;DR: In this paper, the entropic uncertainty relation for sets of N + 1 complementary observables A_k in N-dimensional Hilbert space is sharpened to $\sum_k H(A_k) \geq \frac{1}{2}N \ln\left(\frac{1}{2}N\right) + \left(\frac{1}{2}N + 1\right) \ln\left(\frac{1}{2}N + 1\right)$ for even N.

Journal ArticleDOI
TL;DR: In this article, the authors discuss infinite time ruin probabilities in continuous time in a compound Poisson process with a constant premium rate and a constant interest rate, and discuss equations for the ruin probability as well as approximations and upper and lower bounds.
Abstract: In the present paper we discuss infinite time ruin probabilities in continuous time in a compound Poisson process with a constant premium rate and a constant interest rate. We discuss equations for the ruin probability as well as approximations and upper and lower bounds. Two special cases are treated in more detail: the case with zero initial reserve, and the case with exponential claim sizes.

Journal ArticleDOI
TL;DR: Universal bounds for the cardinality of codes in the Hamming space F_r^n with a given minimum distance d and/or dual distance d' are stated, and a self-contained proof of optimality of these bounds in the framework of the linear programming method is given.
Abstract: Universal bounds for the cardinality of codes in the Hamming space F_r^n with a given minimum distance d and/or dual distance d' are stated. A self-contained proof of optimality of these bounds in the framework of the linear programming method is given. The necessary and sufficient conditions for attainability of the bounds are found. The parameters of codes satisfying these conditions are presented in a table. A new upper bound for the minimum distance of self-dual codes and a new lower bound for the crosscorrelation of half-linear codes are obtained.
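
The linear programming method is easy to sketch in the binary case: the Delsarte LP maximizes the total distance distribution subject to nonnegativity of its Krawtchouk transform, and its optimum upper-bounds the code cardinality. A basic version (this is the plain Delsarte LP, not the paper's refined universal bounds; names are illustrative):

```python
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, x):
    """Binary Krawtchouk polynomial K_k(x) for length n."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

def delsarte_bound(n, d):
    """LP upper bound on the size of a binary code with length n and
    minimum distance d: maximize 1 + sum_{i >= d} A_i subject to A_i >= 0
    and sum_i A_i K_k(i) >= -K_k(0) for k = 1..n (A_0 = 1 implicit)."""
    idx = list(range(d, n + 1))
    c = [-1.0] * len(idx)  # linprog minimizes, so negate the objective
    A_ub = [[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)]
    b_ub = [float(krawtchouk(n, k, 0)) for k in range(1, n + 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx))
    return 1.0 - res.fun

print(delsarte_bound(8, 4))  # an upper bound on A(8, 4)
```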

Posted Content
TL;DR: The overall conclusion is that almost all problems are hard to solve with quantum circuits, including decision problems and guess-checkable functions.
Abstract: In a recent preprint by Deutsch et al. [1995] the authors suggest the possibility of polynomial approximability of arbitrary unitary operations on $n$ qubits by 2-qubit unitary operations. We address that comment by proving strong lower bounds on the approximation capabilities of g-qubit unitary operations for fixed g. We consider approximation of unitary operations on subspaces as well as approximation of states and of density matrices by quantum circuits in several natural metrics. The ability of quantum circuits to probabilistically solve decision problems and guess checkable functions is discussed. We also address exact unitary representation by reducing the upper bound by a factor of n^2 and by formalizing the argument given by Barenco et al. [1995] for the lower bound. The overall conclusion is that almost all problems are hard to solve with quantum circuits.

Journal ArticleDOI
TL;DR: A pattern-independent, linear time algorithm (iMax) is proposed that estimates, at every contact point, an upper bound envelope of all possible current waveforms that result from the application of different input patterns to the circuit.
Abstract: Currents flowing in the power and ground (P&G) buses of CMOS digital circuits affect both circuit reliability and performance by causing excessive voltage drops. Excessive voltage drops manifest themselves as glitches on the P&G buses and cause erroneous logic signals and degradation in switching speeds. Maximum current estimates are needed at every contact point in the buses to study the severity of the voltage drop problems and to redesign the supply lines accordingly. These currents, however, depend on the specific input patterns that are applied to the circuit. Since it is prohibitively expensive to enumerate all possible input patterns, this problem has, for a long time, remained largely unsolved. In this paper, we propose a pattern-independent, linear time algorithm (iMax) that estimates, at every contact point, an upper bound envelope of all possible current waveforms that result from the application of different input patterns to the circuit. The algorithm is extremely efficient and produces good results for most circuits, as is demonstrated by experimental results on several benchmark circuits. The accuracy of the algorithm can be further improved by resolving the signal correlations that exist inside a circuit. We also present a novel partial input enumeration (PIE) technique to resolve signal correlations and significantly improve the upper bounds for circuits where the bounds produced by iMax are not tight. We establish with extensive experimental results that these algorithms represent a good time-accuracy trade-off and are applicable to VLSI circuits.

Journal ArticleDOI
TL;DR: This paper presents several natural and realistic ways of modeling the inaccuracies in the distance data, and considers various ways of “fitting” a given distance matrix to a tree in order to minimize various criteria of error in the fit.
Abstract: Constructing evolutionary trees for species sets is a fundamental problem in computational biology. One of the standard models assumes the ability to compute distances between every pair of species, and seeks to find an edge-weighted tree T in which the distance in the tree between the leaves of T corresponding to the species i and j exactly equals the observed distance, d_ij. When such a tree exists, this is expressed in the biological literature by saying that the distance function or matrix is additive, and trees can be constructed from additive distance matrices in O(n^2) time. Real distance data is hardly ever additive, and we therefore need ways of modeling the problem of finding the best-fit tree as an optimization problem. In this paper we present several natural and realistic ways of modeling the inaccuracies in the distance data. In one model we assume that we have upper and lower bounds for the distances between pairs of species and try to find an additive distance matrix between these bounds. In a second model we are given a partial matrix and asked to find if we can fill in the unspecified entries in order to make the entire matrix additive. For both of these models we also consider a more restrictive problem of finding a matrix that fits a tree which is not only additive but also ultrametric. Ultrametric matrices correspond to trees which can be rooted so that the distance from the root to any leaf is the same. Ultrametric matrices are desirable in biology since the edge weights then indicate evolutionary time. We give polynomial-time algorithms for some of the problems while showing others to be NP-complete. We also consider various ways of "fitting" a given distance matrix (or a pair of upper- and lower-bound matrices) to a tree in order to minimize various criteria of error in the fit. For most criteria this optimization problem turns out to be NP-hard, while we do get polynomial-time algorithms for some.
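
Additivity itself is easy to test directly: a metric fits an edge-weighted tree iff every quadruple of species satisfies the four-point condition, i.e. the two largest of the three pairwise sums coincide. A brute-force O(n^4) check (illustrative; tree construction from an additive matrix runs in O(n^2)):

```python
from itertools import combinations

def is_additive(D, tol=1e-9):
    """Four-point condition on a symmetric distance matrix D."""
    n = len(D)
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([D[i][j] + D[k][l],
                       D[i][k] + D[j][l],
                       D[i][l] + D[j][k]])
        if abs(sums[2] - sums[1]) > tol:   # the two largest must be equal
            return False
    return True

# star tree: center b (index 1) with unit edges to a, c, d
D = [[0, 1, 2, 2],
     [1, 0, 1, 1],
     [2, 1, 0, 2],
     [2, 1, 2, 0]]
print(is_additive(D))  # True
```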

Journal ArticleDOI
TL;DR: A linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^{3/2} n), and an Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is shown to hold.
Abstract: Different classes of on-line algorithms are developed and analyzed for the solution of {0, 1} and relaxed stochastic knapsack problems, in which both profit and size coefficients are random variables. In particular, a linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^{3/2} n). An Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is also shown to hold.
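
The flavor of a linear-time on-line rule can be conveyed with a fixed profit-density threshold: accept an arriving item iff it fits and its profit/size ratio clears the threshold. The paper's algorithm and its tuned threshold are what achieve the O(log^{3/2} n) gap; the version below is only an illustrative stand-in:

```python
import random

def online_threshold_knapsack(items, capacity, threshold):
    """One pass over items revealed on-line; accept item (profit, size)
    iff profit/size >= threshold and it still fits."""
    value = used = 0.0
    for profit, size in items:
        if size > 0 and profit / size >= threshold and used + size <= capacity:
            used += size
            value += profit
    return value

n = 1000
items = [(random.random(), random.random()) for _ in range(n)]  # random profits, sizes
print(online_threshold_knapsack(items, capacity=n / 4, threshold=1.0))
```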

Journal ArticleDOI
TL;DR: A precise form of the quantum-mechanical time-energy uncertainty relation is derived, giving upper and lower bounds for the probability of finding the system in a state in a given subspace at a later or earlier time.
Abstract: A precise form of the quantum-mechanical time-energy uncertainty relation is derived. For any given initial state (density operator), time-dependent Hamiltonian, and subspace of reference states, it gives upper and lower bounds for the probability of finding the system in a state in that subspace at a later or earlier time. The bounds involve only the initial data, the energy uncertainty in the initial state, and the energy uncertainty in the reference subspace. They describe how fast the state enters or leaves the reference subspace. They are exact if, but not only if, the initial state or the projection onto the reference subspace commutes with the Hamiltonian. The basic tool used in the proof is a simple inequality for expectation values of commutators, which generalizes the usual uncertainty relation. By introducing suitable comparison dynamics (trial propagators), the bounds can be made arbitrarily tight. They represent a time-dependent variational principle, in terms of trial propagators, which provides explicit error estimates and reproduces the exact time evolution when one varies over all trial propagators. As illustrations, we derive accurate lower bounds on the escape time of a particle out of a potential well modeling a quantum dot, and the total time before which a He^+ ion moving in a uniform magnetic field loses its electron.