
Showing papers on "Upper and lower bounds published in 2001"


Journal ArticleDOI
TL;DR: A generalized reduction, based on an algorithm that represents an arbitrary k-CNF formula as a disjunction of 2^{εn} k-CNF formulas that are sparse (each disjunct has O(n) clauses), is presented, and Circuit-SAT is shown to be SERF-complete for all NP-search problems.

1,410 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that the bound from the electroweak data on the size of extra dimensions accessible to all the standard model fields is rather loose, and that these extra dimensions could have a compactification scale as low as 300 GeV for one extra dimension.
Abstract: We show that the bound from the electroweak data on the size of extra dimensions accessible to all the standard model fields is rather loose. These "universal" extra dimensions could have a compactification scale as low as 300 GeV for one extra dimension. This is because the Kaluza-Klein number is conserved and thus the contributions to the electroweak observables arise only from loops. The main constraint comes from weak-isospin violation effects. We also compute the contributions to the S parameter and the Zbb̄ vertex. The direct bound on the compactification scale is set by CDF and D0 in the few hundred GeV range, and Run II of the Tevatron will either discover extra dimensions or significantly raise the bound on the compactification scale. In the case of two universal extra dimensions, the current lower bound on the compactification scale depends logarithmically on the ultraviolet cutoff of the higher dimensional theory, but can be estimated to lie between 400 and 800 GeV. With three or more extra dimensions, the cutoff dependence may be too strong to allow an estimate.

1,229 citations


Journal ArticleDOI
29 Jun 2001
TL;DR: An efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output is proposed.
Abstract: This paper proposes an efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output. The numerical algorithm has an iterative water-filling interpretation. The algorithm converges from any starting point, and it reaches within 1/2 nats per user per output dimension from the sum capacity after just one iteration. The characterization of sum capacity also allows an upper bound and a lower bound for the entire capacity region to be derived.
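The per-user step of the algorithm has a compact form: each user in turn treats the other users' signals as coloured noise and water-fills its own power against the resulting effective channel. Below is a minimal numpy sketch of that idea, assuming a flat Gaussian MAC with per-user channel matrices; the function names, iteration counts, and bisection tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def waterfill(gains, power):
    # Classic water-filling: maximize sum(log(1 + g_i * p_i)) s.t. sum(p_i) = power.
    lo, hi = 0.0, power + 1.0 / gains.min()   # the water level lies in [lo, hi]
    for _ in range(100):                      # bisection on the water level
        level = 0.5 * (lo + hi)
        if np.maximum(level - 1.0 / gains, 0.0).sum() < power:
            lo = level
        else:
            hi = level
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

def iterative_waterfilling(H, powers, n_iters=25):
    # H[k]: channel of user k (n_rx x n_tx_k); powers[k]: its power budget.
    n_rx, K = H[0].shape[0], len(H)
    Q = [np.zeros((Hk.shape[1], Hk.shape[1]), dtype=complex) for Hk in H]
    for _ in range(n_iters):
        for k in range(K):
            # Effective noise: identity plus the other users' received covariances.
            Z = np.eye(n_rx) + sum(H[j] @ Q[j] @ H[j].conj().T
                                   for j in range(K) if j != k)
            G = H[k].conj().T @ np.linalg.inv(Z) @ H[k]
            mu, U = np.linalg.eigh(G)                  # effective channel gains
            p = waterfill(np.maximum(mu.real, 1e-12), powers[k])
            Q[k] = U @ np.diag(p) @ U.conj().T         # update user k's covariance
    return Q
```

The fixed point of these updates is the sum-capacity-achieving input distribution; per the abstract, a single sweep already lands within 1/2 nat per user per output dimension of the sum capacity.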

1,128 citations


Proceedings ArticleDOI
11 Jun 2001
TL;DR: It is shown that in all data gathering scenarios presented, there exist networks that achieve lifetimes equal to, or greater than 95% of, the derived bounds; hence, depending on the scenario, the bounds are either tight or near-tight.
Abstract: We ask a fundamental question concerning the limits of energy efficiency of sensor networks-what is the upper bound on the lifetime of a sensor network that collects data from a specified region using a certain number of energy-constrained nodes? The answer to this question is valuable for two main reasons. First, it allows calibration of real world data-gathering protocols and an understanding of factors that prevent these protocols from approaching fundamental limits. Secondly, the dependence of lifetime on factors like the region of observation, the source behavior within that region, basestation location, number of nodes, radio path loss characteristics, efficiency of node electronics and the energy available on a node, is exposed. This allows architects of sensor networks to focus on factors that have the greatest potential impact on network lifetime. By employing a combination of theory and extensive simulations of constructed networks, we show that in all data gathering scenarios presented, there exist networks which achieve lifetimes equal to or >95% of the derived bounds. Hence, depending on the scenario, our bounds are either tight or near-tight.
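Bounds of this kind typically follow an energy-conservation argument: the network cannot outlive the point at which its total initial energy is exhausted by the minimum average power that any feasible data-gathering scheme must spend. As a schematic, with assumed notation rather than the paper's exact expression:

$$ T_{\text{lifetime}} \;\le\; \frac{N \, E_{\text{node}}}{\overline{P}_{\min}}, $$

where N nodes each start with energy E_node and P̄_min is the minimum average power required to deliver the sensed data to the base station, a quantity that encodes the region of observation, source behavior, base-station location, path loss, and node electronics listed above.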

693 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: Several consequences of this algorithm for related problems on lattices and codes are obtained, including an improvement for polynomial time approximations to the shortest vector problem.
Abstract: We present a randomized 2^{O(n)}-time algorithm to compute a shortest non-zero vector in an n-dimensional rational lattice. The best known time upper bound for this problem was 2^{O(n log n)}, first given by Kannan [7] in 1983. We obtain several consequences of this algorithm for related problems on lattices and codes, including an improvement for polynomial time approximations to the shortest vector problem. In this improvement we gain a factor of log log n in the exponent of the approximation factor.

602 citations


Proceedings ArticleDOI
11 Jun 2001
TL;DR: An upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-square-distributed variables is derived and evaluated analytically and compared to the results obtained by Monte Carlo simulations.
Abstract: We consider the capacity of multiple-input-multiple-output (MIMO) systems with reduced complexity. One link end uses all available antennas, while the other chooses the "best" L out of N antennas. As "best", we use those antennas that maximize capacity. We derive an upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-squared variables. This bound is then evaluated analytically, and compared to results from Monte Carlo simulations. As long as L is at least as large as the number of antennas at the other link end, the achieved capacity is close to the capacity of a full-complexity system. We demonstrate, for example, that for L=3, N=8 at the receiver, and 3 antennas at the transmitter, the capacity of the reduced-complexity scheme is 20 bits/s/Hz compared to 23 bits/s/Hz of a full-complexity scheme.
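Because the selection rule is simply "the capacity-maximizing subset," the scheme is easy to reproduce by brute force over all C(N, L) antenna subsets. A Monte Carlo sketch for i.i.d. Rayleigh channels follows; the 20 dB SNR is an assumed operating point, not a value taken from the abstract.

```python
import numpy as np
from itertools import combinations

def selection_capacity(H, snr, L):
    # Capacity (bits/s/Hz) using the best L of the N receive antennas of H (N x M).
    N, M = H.shape
    best = 0.0
    for rows in combinations(range(N), L):
        Hs = H[list(rows), :]
        c = np.log2(np.linalg.det(np.eye(L) + (snr / M) * Hs @ Hs.conj().T).real)
        best = max(best, c)
    return best

# N=8 receive antennas, choose L=3, M=3 transmit antennas, as in the example above.
rng = np.random.default_rng(0)
N, M, L, snr = 8, 3, 3, 10.0 ** (20 / 10)
caps = [selection_capacity((rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))
                           / np.sqrt(2), snr, L) for _ in range(500)]
print(f"mean selection capacity: {np.mean(caps):.1f} bits/s/Hz")
```

For large N the exhaustive search becomes expensive, which is exactly why the paper's analytical bound in terms of ordered chi-squared variables is useful.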

557 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show how to compute or estimate capacity-related quantities for bosonic Gaussian channels, including a general upper bound on the quantum capacity; all bounds are explicitly evaluated for a one-mode channel with attenuation or amplification and classical noise.
Abstract: We show how to compute or at least to estimate various capacity-related quantities for bosonic Gaussian channels. Among these are the coherent information, the entanglement-assisted classical capacity, the one-shot classical capacity, and a quantity involving the transpose operation, shown to be a general upper bound on the quantum capacity, even allowing for finite errors. All bounds are explicitly evaluated for the case of a one-mode channel with attenuation or amplification and classical noise.

476 citations


Journal ArticleDOI
TL;DR: In this article, a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price is presented, which is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem.
Abstract: We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest.
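The lower-bound half of the recipe is straightforward to sketch: fix any exercise rule (in the paper, the rule implied by the approximate price), simulate it, and the average discounted payoff is a valid lower bound, since no stopping rule can outperform the optimal one. A minimal Python sketch under Black-Scholes dynamics with discrete exercise dates; the naive threshold rule and all parameter values are assumptions for illustration, standing in for a real approximation.

```python
import numpy as np

def lower_bound_price(policy, s0, r, sigma, T, n_steps, n_paths, payoff, seed=0):
    # Monte Carlo value of a (generally suboptimal) exercise rule: a lower bound.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    cash = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        z = rng.normal(size=n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        t = step * dt
        # Exercise where the rule says so (or at expiry) and the payoff is positive.
        ex = alive & (policy(t, s) | (step == n_steps)) & (payoff(s) > 0)
        cash[ex] = np.exp(-r * t) * payoff(s[ex])
        alive &= ~ex
    return cash.mean(), cash.std(ddof=1) / np.sqrt(n_paths)

K = 100.0
put = lambda s: np.maximum(K - s, 0.0)
rule = lambda t, s: s < 0.9 * K          # assumed heuristic rule, not from the paper
price, se = lower_bound_price(rule, s0=100, r=0.05, sigma=0.2, T=1.0,
                              n_steps=50, n_paths=20000, payoff=put)
print(f"lower bound: {price:.3f} +/- {se:.3f}")
```

The paper's matching upper bound comes from Monte Carlo on the dual minimization problem, its main theoretical result, which this sketch does not reproduce.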

405 citations


Journal ArticleDOI
TL;DR: A new approach in a posteriori error estimation is studied, in which the numerical error of finite element approximations is estimated in terms of quantities of interest rather than the classical energy norm.
Abstract: In this paper, we study a new approach in a posteriori error estimation, in which the numerical error of finite element approximations is estimated in terms of quantities of interest rather than the classical energy norm. These so-called quantities of interest are characterized by linear functionals on the space of functions to which the solution belongs. We present here the theory with respect to a class of elliptic boundary-value problems, and in particular, show how to obtain accurate estimates as well as upper and lower bounds on the error. We also study the new concept of goal-oriented adaptivity, which embodies mesh adaptation procedures designed to control error in specific quantities. Numerical experiments confirm that such procedures greatly accelerate the attainment of local features of the solution to preset accuracies as compared to traditional adaptive schemes based on energy norm error estimates.
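For a linear problem, the duality argument behind such estimates is short. If u_h is the Galerkin approximation of a(u, v) = ℓ(v) and the quantity of interest is a linear functional Q, introduce the dual (adjoint) solution z defined by a(v, z) = Q(v) for all admissible v; then

$$ Q(u) - Q(u_h) \;=\; a(u - u_h,\, z) \;=\; \ell(z) - a(u_h, z), $$

so the error in the quantity of interest is the residual of u_h acting on the dual solution. Goal-oriented estimators approximate this quantity, and goal-oriented adaptivity refines the mesh where it contributes most. (This is the standard argument stated in generic notation, not the paper's specific estimator.)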

370 citations


Journal ArticleDOI
TL;DR: The influence of constraints (chemical prior knowledge) on spectral parameters of the peaks of doublets is demonstrated and the inherent benefits for quantitation are shown.
Abstract: The Cramér-Rao lower bounds (CRBs) are the lowest possible standard deviations of all unbiased model parameter estimates obtained from the data. Consequently, they give insight into the potential performance of quantitation estimators. Using analytical CRB expressions for the spectral parameters of singlets and doublets in noise, one is able to judge the precision as a function of spectral and experimental parameters. We point out the usefulness of these expressions for experimental design. The influence of constraints (chemical prior knowledge) on the spectral parameters of the peaks of doublets is demonstrated, and the inherent benefits for quantitation are shown. Abbreviation used: CRB, Cramér-Rao lower bound.
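For a deterministic signal model ŷ_n(θ) observed in complex white Gaussian noise of variance σ², the bounds follow from the Fisher information matrix in the standard way (generic expression with assumed notation; the paper's analytical formulas specialize this to singlet and doublet lineshapes):

$$ F_{ij} \;=\; \frac{2}{\sigma^2}\,\operatorname{Re}\sum_{n=0}^{N-1} \frac{\partial \hat{y}_n^{*}}{\partial \theta_i}\, \frac{\partial \hat{y}_n}{\partial \theta_j}, \qquad \operatorname{std}(\hat{\theta}_i) \;\ge\; \mathrm{CRB}(\theta_i) \;=\; \sqrt{\big(F^{-1}\big)_{ii}}\,. $$

Imposing prior knowledge, e.g., fixing the frequency separation within a doublet, removes parameters from θ; the constrained CRBs are never larger than the unconstrained ones, which is the quantitation benefit demonstrated in the paper.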

326 citations


Journal ArticleDOI
TL;DR: In this article, an alternative mixed integer linear disjunctive formulation is proposed that has better conditioning properties than the standard nonlinear mixed integer formulation; an upper bound provided by a heuristic solution is used to reduce the tree search.
Abstract: The classical nonlinear mixed integer formulation of the transmission network expansion problem cannot guarantee finding the optimal solution due to its nonconvex nature. We propose an alternative mixed integer linear disjunctive formulation, which has better conditioning properties than the standard disjunctive model. The mixed integer program is solved by a commercial branch and bound code, where an upper bound provided by a heuristic solution is used to reduce the tree search. The heuristic solution is obtained using a GRASP metaheuristic, capable of finding sub-optimal solutions with an affordable computing effort. Combining the upper bound given by the heuristic and the mixed integer disjunctive model, optimality can be proven for several hard problem instances.
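The disjunctive device that linearizes the model is easy to state schematically (assumed notation: DC power flow on a candidate line (i, j) with susceptance γ_ij, flow f_ij, bus angles θ, binary build decision x_ij, and a big-M constant):

$$ \big|\, f_{ij} - \gamma_{ij}(\theta_i - \theta_j) \,\big| \;\le\; M\,(1 - x_{ij}), \qquad |f_{ij}| \;\le\; \bar{f}_{ij}\, x_{ij}, \qquad x_{ij} \in \{0,1\}. $$

When x_ij = 1 the line must obey Kirchhoff's voltage law and its flow limit; when x_ij = 0 the first constraint is relaxed and the second forces zero flow, so the nonconvex product x_ij γ_ij (θ_i − θ_j) of the classical formulation never appears.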

Proceedings ArticleDOI
14 Oct 2001
TL;DR: A new notion of informational complexity is introduced which is related to SM complexity and has nice direct sum properties and appears to be quite powerful and may be of independent interest.
Abstract: Given m copies of the same problem, does it take m times the amount of resources to solve these m problems? This is the direct sum problem, a fundamental question that has been studied in many computational models. We study this question in the simultaneous message (SM) model of communication introduced by A.C. Yao (1979). The equality problem for n-bit strings is well known to have SM complexity Θ(√n). We prove that solving m copies of the problem has complexity Ω(m√n); the best lower bound provable using previously known techniques is Ω(√(mn)). We also prove similar lower bounds on certain Boolean combinations of multiple copies of the equality function. These results can be generalized to a broader class of functions. We introduce a new notion of informational complexity which is related to SM complexity and has nice direct sum properties. This notion is used as a tool to prove the above results; it appears to be quite powerful and may be of independent interest.

Journal ArticleDOI
TL;DR: A fully polynomial approximation scheme for the restricted shortest path problem, which improves Hassin's original result and achieves time complexity of O(|E|n(log log(UB/LB) + 1/ε)), where UB and LB are upper and lower bounds for the problem.

Journal ArticleDOI
Don Zagier1
01 Sep 2001-Topology
TL;DR: In this article, it was shown that the coefficients of the Taylor expansion at q = 1 are equal to the numbers ξ_D of regular linearized chord diagrams as defined by Stoimenow, and hence give an upper bound for the number of linearly independent Vassiliev invariants of degree D. The same values and derivatives of all orders at all roots of unity are obtained as limiting values of the function $-\frac{1}{2}\sum_{n\in\mathbb{Z}}(-1)^n\,|6n+1|\,q^{(3n^2+n)/2}$, the "half-derivative" of the Dedekind eta-function.


Journal ArticleDOI
TL;DR: In this article, the authors elucidate the physical basis for the upper bound on high energy neutrino fluxes implied by the observed cosmic ray flux, which is valid for neutrinos produced either by pγ reactions or by p-p(n) reactions in sources which are optically thin for high energy protons.
Abstract: We elucidate the physical basis for the upper bound on high energy neutrino fluxes implied by the observed cosmic ray flux. We stress that the bound is valid for neutrinos produced either by pγ reactions or by p-p(n) reactions in sources which are optically thin for high energy protons to photo-meson and nucleon-meson interactions. We show that the upper bound is robust and conservative. The Waxman-Bahcall bound overestimates the most likely neutrino flux by a factor ∼5/τ for small optical depths τ. The upper limit cannot be plausibly evaded by invoking magnetic fields, optically thick active galactic nuclei (AGNs), or large hidden fluxes of extragalactic protons. We describe the implications of the bound for future experiments including the AMANDA, ANTARES, Auger, ICECUBE, NESTOR, and OWL/AIRWATCH detectors.

Proceedings ArticleDOI
09 Jan 2001
TL;DR: A broadcast technique is introduced that exploits selective families in a new way, and a new, rather surprising insight is obtained into the real gap between deterministic and randomized protocols.
Abstract: Selective families, a weaker variant of superimposed codes [KS64, F92, I97, CR96], have been recently used to design Deterministic Distributed Broadcast (DDB) protocols for unknown radio networks (a radio network is said to be unknown when the nodes know nothing about the network but their own label) [CGGPR00, CGOR00]. We first provide a general, almost tight lower bound on the size of selective families. Then, by reversing the selective families - DDB protocols connection, we exploit our lower bound to construct a family of "hard" radio networks (i.e., directed graphs). These networks yield an Ω(n log D) lower bound on the completion time of DDB protocols that is superlinear (in the size n of the network) even for very small maximum eccentricity D of the network, while all the previous lower bounds (e.g., Ω(D log n) [CGGPR00]) are superlinear only when D is almost linear. On the other hand, the previous upper bounds are all superlinear in n independently of the eccentricity D and the maximum in-degree d of the network. We introduce a broadcast technique that exploits selective families in a new way. Then, by combining selective families of almost optimal size with our new broadcast technique, we obtain an O(Dd log³ n) upper bound that we prove to be almost optimal when d = O(n/D). This exponentially improves over the best known upper bound [CGR00] when D, d = O(polylog n). Furthermore, by comparing our deterministic upper bound with the best known randomized one [BGI87], we obtain a new, rather surprising insight into the real gap between deterministic and randomized protocols. It turns out that this gap is exponential (as discovered in [BGI87]), but only when the network has large maximum in-degree (i.e., d = n^a for some constant a > 0). We then look at the multibroadcast problem on unknown radio networks. A similar connection to that between selective families and (single) broadcast also holds between superimposed codes and multibroadcast. We in fact combine a variant of our (single) broadcast technique with superimposed codes of almost optimal size available in the literature [EFF85, HS87, I97, CHI99]. This yields a multibroadcast protocol having completion time O(Dd² log³ n). Finally, in order to determine the limits of our multibroadcast technique, we generalize (and improve) the best known lower bound [CR96] on the size of superimposed codes.

Journal ArticleDOI
18 Jun 2001
TL;DR: This work proves a general lower bound on the complexity of unbounded error probabilistic communication protocols for the functions defined by Hadamard matrices, and gives an upper bound on the margin of any embedding of a concept class in half spaces.
Abstract: We prove a general lower bound on the complexity of unbounded error probabilistic communication protocols. This result improves on a lower bound for bounded error protocols from Krause (1996). As a simple consequence we get the, to our knowledge, first linear lower bound on the complexity of unbounded error probabilistic communication protocols for the functions defined by Hadamard matrices. We also give an upper bound on the margin of any embedding of a concept class in half spaces.

Journal ArticleDOI
TL;DR: To do so, the well-known Clément/Scott–Zhang interpolation operator is generalized to the hp-context and new polynomial inverse estimates are presented and an hp-adaptive strategy is proposed.
Abstract: A family η_α, α ∈ [0,1], of residual-based error indicators for the hp-version of the finite element method is presented and analyzed. Upper and lower bounds for the error indicators η_α are established. To do so, the well-known Clément/Scott–Zhang interpolation operator is generalized to the hp-context, and new polynomial inverse estimates are presented. An hp-adaptive strategy is proposed. Numerical examples illustrate the performance of the error indicators and the adaptive strategy.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the case cg < c is very tightly constrained by the observation of the highest energy cosmic rays, and that if the cosmic rays have an extragalactic origin the bound is orders of magnitude tighter, of order c−cg < 2 × 10−19c.
Abstract: Recently, interesting 4-D Lorentz violating models have been proposed, in which all particles have a common maximum velocity c, but gravity propagates (in the preferred frame) with a different maximum velocity cg≠c. We show that the case cg < c is very tightly constrained by the observation of the highest energy cosmic rays. Assuming a galactic origin for the cosmic rays gives a conservative bound of c−cg <2 × 10−15c; if the cosmic rays have an extragalactic origin the bound is orders of magnitude tighter, of order c−cg < 2 × 10−19c.

Journal ArticleDOI
TL;DR: In this paper, a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well is presented, and the quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard.

Journal ArticleDOI
TL;DR: This paper develops Cramér-Rao bound (CRB) results for low-rank decomposition of three- and four-dimensional arrays, illustrates the behavior of the resulting bounds, and compares alternating least squares algorithms that are commonly used to compute such decompositions with the respective CRBs.
Abstract: Unlike low-rank matrix decomposition, which is generically nonunique for rank greater than one, low-rank three- and higher-dimensional array decomposition is unique, provided that the array rank is lower than a certain bound, and the correct number of components (equal to the array rank) is sought in the decomposition. Parallel factor (PARAFAC) analysis is a common name for low-rank decomposition of higher dimensional arrays. This paper develops Cramér-Rao bound (CRB) results for low-rank decomposition of three- and four-dimensional (3-D and 4-D) arrays, illustrates the behavior of the resulting bounds, and compares alternating least squares algorithms that are commonly used to compute such decompositions with the respective CRBs. Simple-to-check necessary conditions for a unique low-rank decomposition are also provided.

Journal ArticleDOI
TL;DR: Linear discrete-time systems with stochastic uncertainties in their state-space matrices are considered and the problems of finite-horizon filtering and output-feedback control are solved, taking into account possible cross-correlations between the uncertain parameters.

Journal ArticleDOI
TL;DR: An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented, which exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian.
Abstract: An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This lower bound enables the development of an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
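One standard bound of this type follows from |c| ≤ c²/(2ξ) + ξ/2 for every ξ > 0 (with equality at ξ = |c|), so for a Laplacian prior

$$ e^{-|c|} \;\ge\; \exp\!\Big(-\frac{c^2}{2\xi} - \frac{\xi}{2}\Big) \qquad \text{for all } \xi > 0. $$

The right-hand side is Gaussian in c, so substituting it for the prior turns the marginal over coefficients into an analytically tractable Gaussian integral, while optimizing the variational parameters ξ tightens the bound. (Shown here for the Laplacian limit itself; whether this matches the paper's exact parameterization of the heavy-tailed family is an assumption.)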

Journal ArticleDOI
TL;DR: A new fast algorithm based on the winner-update strategy, which utilizes an ascending list of lower bounds on the matching error to determine the temporary winner; two lower bound lists, derived by using partial distance and by using Minkowski's inequality, are described.
Abstract: Block matching is a widely used method for stereo vision, visual tracking, and video compression. Many fast algorithms for block matching have been proposed in the past, but most of them do not guarantee that the match found is the globally optimal match in a search range. This paper presents a new fast algorithm based on the winner-update strategy, which utilizes an ascending list of lower bounds on the matching error to determine the temporary winner. Two lower bound lists, derived by using partial distance and by using Minkowski's inequality, are described. The basic idea of the winner-update strategy is to avoid, at each search position, the costly computation of the matching error when there exists a lower bound larger than the global minimum matching error. The proposed algorithm can significantly speed up the computation of block matching because (1) the computational cost of the lower bound we use is less than that of the matching error itself; (2) an element in the ascending lower bound list is calculated only when its preceding element is already smaller than the minimum matching error computed so far; (3) for many search positions, only the first few lower bounds in the list need to be calculated. Our experiments show that, when applied to motion vector estimation for several widely used test videos, 92% to 98% of operations can be saved while still guaranteeing global optimality. Moreover, the proposed algorithm can easily be modified either to meet a limited time requirement or to provide an ordered list of best candidate matches.
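A minimal sketch of the winner-update idea, using partial row sums of the SAD as the ascending lower bound list; the paper's lists, built from partial distance and Minkowski's inequality, are tighter, and all names here are illustrative.

```python
import heapq
import numpy as np

def winner_update_match(block, frame, candidates):
    # Globally optimal SAD block matching via the winner-update strategy.
    # candidates: iterable of top-left positions (y, x) to search in `frame`.
    h, w = block.shape
    blk = block.astype(np.int64)
    heap = [(0, 0, pos) for pos in candidates]  # (lower bound, rows summed, pos)
    heapq.heapify(heap)
    while True:
        bound, level, (y, x) = heapq.heappop(heap)
        if level == h:
            # The bound is now the exact SAD; every other candidate's bound
            # (<= its exact SAD) is at least this large, so the winner is final.
            return (y, x), bound
        row = frame[y + level, x:x + w].astype(np.int64)
        bound += int(np.abs(blk[level] - row).sum())   # refine by one more row
        heapq.heappush(heap, (bound, level + 1, (y, x)))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
block = frame[17:25, 30:38].copy()                 # an 8x8 template from the frame
cands = [(y, x) for y in range(57) for x in range(57)]
print(winner_update_match(block, frame, cands))    # expect ((17, 30), 0)
```

The lazy refinement is what delivers the reported savings: most candidates are popped only a few times before the true winner's bound list is exhausted.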

Journal ArticleDOI
TL;DR: In this paper, a splitting type theorem for n-dimensional manifolds with a finite volume end was proved, which can be viewed as a study of the equality case of a theorem of Cheng.
Abstract: In this paper, we continued our investigation of complete manifolds whose spectrum of the Laplacian has an optimal positive lower bound. In particular, we proved a splitting type theorem for n-dimensional manifolds that have a finite volume end. This can be viewed as a study of the equality case of a theorem of Cheng.

Proceedings ArticleDOI
25 Jun 2001
TL;DR: An approach to estimation for continuous-time and discrete-time linear systems is proposed that is based on the idea of using switching observers, and a method is described to design a switching observer that aims to minimize the upper bound to the estimation cost function.
Abstract: An approach to estimation for continuous-time and discrete-time linear systems is proposed that is based on the idea of using switching observers. Convergence conditions have been found to ensure the stability of the error dynamics; in addition, they guarantee the existence of an upper bound to a quadratic cost function of the estimation error. The observer gains may be selected by solving a set of linear matrix inequalities (LMIs). A method is described to design a switching observer that aims to minimize the upper bound to the estimation cost function. Moreover, such a design may be efficiently accomplished by using an LMI algorithm.
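The non-switching building block beneath this design is a standard observer-gain LMI; the switching logic and the cost bound are the paper's additions on top of it. A cvxpy sketch (the plant matrices are assumed for illustration): finding P ≻ 0 and Y with AᵀP + PA − CᵀYᵀ − YC ≺ 0 and setting L = P⁻¹Y renders the error dynamics ė = (A − LC)e stable.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # example plant (assumed), dx/dt = Ax
C = np.array([[1.0, 0.0]])                 # measured output y = Cx
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)    # Lyapunov matrix
Y = cp.Variable((n, C.shape[0]))           # Y = P L, the standard change of variable
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A - C.T @ Y.T - Y @ C << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve()   # needs an SDP-capable solver

L = np.linalg.inv(P.value) @ Y.value       # observer gain
print(np.linalg.eigvals(A - L @ C))        # eigenvalues in the open left half-plane
```

Minimizing a bound on a quadratic estimation cost, as in the paper, replaces this feasibility objective with a suitable scalar subject to additional LMIs.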

Proceedings Article
03 Jan 2001
TL;DR: An algorithm is presented that induces a class of models with thin junction trees—models that are characterized by an upper bound on the size of the maximal cliques of their triangulated graph that allows both an efficient implementation of an iterative scaling parameter estimation algorithm and also ensures that inference can be performed efficiently with the final model.
Abstract: We present an algorithm that induces a class of models with thin junction trees—models that are characterized by an upper bound on the size of the maximal cliques of their triangulated graph. By ensuring that the junction tree is thin, inference in our models remains tractable throughout the learning process. This allows both an efficient implementation of an iterative scaling parameter estimation algorithm and also ensures that inference can be performed efficiently with the final model. We illustrate the approach with applications in handwritten digit recognition and DNA splice site detection.

Journal ArticleDOI
TL;DR: A simple generic approach for obtaining new fast lower bounds for the bin packing problem based on dual feasible functions is presented, which proves an asymptotic worst-case performance of 3/4 for a bound that can be computed in linear time for items sorted by size.
Abstract: The bin packing problem is one of the classical NP-hard optimization problems. In this paper, we present a simple generic approach for obtaining new fast lower bounds, based on dual feasible functions. Worst-case analysis as well as computational results show that one of our classes clearly outperforms the previous best “economical” lower bound for the bin packing problem by Martello and Toth, which can be understood as a special case. In particular, we prove an asymptotic worst-case performance of 3/4 for a bound that can be computed in linear time for items sorted by size. In addition, our approach provides a general framework for establishing new bounds.
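A dual feasible function u maps sizes in [0, 1] back into [0, 1] so that every item set fitting in one bin keeps Σu ≤ 1; summing u over all items and rounding up then bounds the optimal bin count from below. A small Python sketch using the u^(k) family known from the dual-feasible-function literature (treating it as one of the paper's classes is an assumption):

```python
import math

def u_k(x, k):
    # A dual feasible function: u_k(x) = x when (k+1)x is integral,
    # otherwise floor((k+1)x) / k.  Maps [0, 1] into [0, 1].
    t = (k + 1) * x
    return x if abs(t - round(t)) < 1e-9 else math.floor(t) / k

def dff_lower_bound(items, ks=(1, 2, 3, 4)):
    # Lower bound on the bin count (unit capacity): best bound over the family.
    trivial = math.ceil(sum(items) - 1e-9)
    return max([trivial] + [math.ceil(sum(u_k(x, k) for x in items) - 1e-9)
                            for k in ks])

items = [0.51] * 5                       # any two together overflow a unit bin
print(dff_lower_bound(items))            # prints 5; the plain size sum gives only 3
```

Each u_k runs in linear time over the items, matching the paper's emphasis on fast, "economical" bounds.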

Journal ArticleDOI
TL;DR: In this article, it was shown that n_∞(d, ε), the minimal number of points in dimension d with ∗-discrepancy at most ε, depends linearly on d and at most quadratically on ε^−1.
Abstract: We study bounds on the classical ∗-discrepancy and on its inverse. Let n_∞(d, ε) be the inverse of the ∗-discrepancy, i.e., the minimal number of points in dimension d with ∗-discrepancy at most ε. We prove that n_∞(d, ε) depends linearly on d and at most quadratically on ε^−1. We present three upper bounds on n_∞(d, ε), all of them based on probabilistic arguments and therefore non-constructive. The upper bound linear in d follows directly from deep results of the theory of empirical processes but contains an unknown multiplicative factor. Two other upper bounds are without unknown factors but do not yield the linear (in d) upper bound. One upper bound is based on an average case analysis for the L_p-star discrepancy, and our numerical results seem to indicate that it gives the best estimates for specific values of d and ε. We also present two lower bounds on n_∞(d, ε). For lower bounds, we allow arbitrary coefficients in the discrepancy formula. We prove that n_∞(d, ε) must be of order d log ε^−1 and, roughly, of order d^λ ε^−(1−λ) for any λ ∈ (0, 1).