
Showing papers on "Upper and lower bounds" published in 2009


Book ChapterDOI
28 Mar 2009
TL;DR: A related system for bounding stack space requirements is described; it uses the depth of data structures by expressing potential in terms of maxima as well as sums, adding extra structure to typing contexts to describe the form of the bounds.
Abstract: Hofmann and Jost have presented a heap space analysis [1] that finds linear space bounds for many functional programs. It uses an amortised analysis: assigning hypothetical amounts of free space (called potential) to data structures in proportion to their sizes using type annotations. Constraints on these annotations in the type system ensure that the total potential assigned to the input is an upper bound on the total memory required to satisfy all allocations. We describe a related system for bounding the stack space requirements which uses the depth of data structures, by expressing potential in terms of maxima as well as sums. This is achieved by adding extra structure to typing contexts (inspired by O'Hearn's bunched typing [2]) to describe the form of the bounds. We will also present the extra steps that must be taken to construct a typing during the analysis.

1,398 citations


Journal ArticleDOI
TL;DR: In this paper, the perturbation approach originally introduced by Moller and Plesset, terminated at finite order, is found to satisfy most of the requirements for theoretical chemical models, although it does not provide an upper bound for the electronic energy.
Abstract: Some methods of describing electron correlation are compared from the point of view of requirements for theoretical chemical models. The perturbation approach originally introduced by Moller and Plesset, terminated at finite order, is found to satisfy most of these requirements. It is size consistent, that is, applicable to an ensemble of isolated systems in an additive manner. On the other hand, it does not provide an upper bound for the electronic energy. The independent electron-pair approximation is accurate to second order in a Moller-Plesset expansion, but inaccurate in third order. A series of variational methods is discussed which gives upper bounds for the energy, but which lacks size consistency. Finally, calculations on some small molecules using a moderately large Gaussian basis are presented to illustrate these points. Equilibrium geometries, dissociation energies, and energy separations between electronic states of different spin multiplicities are described substantially better by Moller-Plesset theory to second or third order than by Hartree-Fock theory.

1,217 citations


01 Jan 2009
TL;DR: Most changes to the variables are an approximate solution to a trust region subproblem, using the current quadratic model, with a lower bound on the trust region radius that is reduced cautiously, in order to keep the interpolation points well separated until late in the calculation, which lessens damage from computer rounding errors.
Abstract: BOBYQA is an iterative algorithm for finding a minimum of a function F(x), x ∈ R^n, subject to bounds a ≤ x ≤ b on the variables, F being specified by a "black box" that returns the value F(x) for any feasible x. Each iteration employs a quadratic approximation Q to F that satisfies Q(y_j) = F(y_j), j = 1, 2, ..., m, the interpolation points y_j being chosen and adjusted automatically, but m is a prescribed constant, the value m = 2n+1 being typical. These conditions leave much freedom in Q, taken up when the model is updated by the highly successful technique of minimizing the Frobenius norm of the change to the second derivative matrix of Q. Thus no first derivatives of F are required explicitly. Most changes to the variables are an approximate solution to a trust region subproblem, using the current quadratic model, with a lower bound on the trust region radius that is reduced cautiously, in order to keep the interpolation points well separated until late in the calculation, which lessens damage from computer rounding errors. Some other changes to the variables are designed to improve the model without reducing F. These techniques are described. Other topics include the starting procedure that is given an initial vector of variables, the value of m, and the initial trust region radius. There is also a new device called RESCUE that tries to restore normality if severe loss of accuracy occurs in the matrix calculations of the updating of the model. Numerical results are reported and discussed for two test problems, the numbers of variables being between 10 and 320.
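The paper specifies the algorithm rather than code, but as an illustration of how a bound-constrained, derivative-free solver of this kind is typically invoked, here is a minimal sketch using the NLopt Python bindings, whose LN_BOBYQA option implements BOBYQA; the objective, bounds, starting point, and tolerance below are arbitrary assumptions for the example, not values from the paper.

```python
import numpy as np
import nlopt

# A simple quadratic "black box" objective.  NLopt passes (x, grad); grad is
# ignored because BOBYQA is derivative-free.
def f(x, grad):
    return float((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2)

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)      # BOBYQA in 2 variables
opt.set_lower_bounds([0.0, 0.0])         # a <= x
opt.set_upper_bounds([2.0, 2.0])         # x <= b
opt.set_min_objective(f)
opt.set_xtol_rel(1e-8)

x_opt = opt.optimize(np.array([1.5, 1.5]))
print(x_opt, opt.last_optimum_value())   # roughly x = (1.0, 0.0); the bound x2 >= 0 is active
```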

1,148 citations


Journal ArticleDOI
TL;DR: The influence of the network characteristics on the virus spread is analyzed in a new model, the N-intertwined Markov chain model, whose only approximation lies in the application of mean field theory.
Abstract: The influence of the network characteristics on the virus spread is analyzed in a new model, the N-intertwined Markov chain model, whose only approximation lies in the application of mean field theory. The mean field approximation is quantified in detail. The N-intertwined model has been compared with the exact 2^N-state Markov model and with previously proposed "homogeneous" or "local" models. The sharp epidemic threshold τ_c, which is a consequence of mean field theory, is rigorously shown to be equal to τ_c = 1/λ_max(A), where λ_max(A) is the largest eigenvalue (the spectral radius) of the adjacency matrix A. A continued fraction expansion of the steady-state infection probability at node j is presented as well as several upper bounds.
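Since the epidemic threshold above is the reciprocal of the spectral radius of the adjacency matrix, it can be computed directly; the following sketch does so for an assumed 5-node ring graph chosen only for illustration.

```python
import numpy as np

# Adjacency matrix of a small example graph: an undirected 5-node ring.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# N-intertwined mean-field threshold: tau_c = 1 / lambda_max(A),
# where lambda_max(A) is the spectral radius of A.
lambda_max = np.linalg.eigvalsh(A).max()
tau_c = 1.0 / lambda_max
print(lambda_max, tau_c)   # for a ring, lambda_max = 2, so tau_c = 0.5
```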

1,000 citations


Journal ArticleDOI
01 Nov 2009
TL;DR: In this paper, the upper and lower bounds for the normal order of the Erdős-Hooley Δ-function Δ(n), defined for n ∈ ℕ*, are improved.
Abstract: We improve the current upper and lower bounds for the normal order of the Erdős-Hooley Δ-function, defined for n ∈ ℕ* by $\Delta(n) := \sup_{u \in \mathbb{R}} \sum_{d \mid n,\; e^{u} < d \le e^{u+1}} 1$.
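For concreteness, the Δ-function as defined above can be evaluated by brute force in a few lines; the observation that the optimal window can be anchored at a divisor is an elementary remark used for the sketch, not part of the paper's results.

```python
import math

def divisors(n):
    small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def hooley_delta(n):
    """Delta(n): the maximum number of divisors of n in any interval (e^u, e^{u+1}].
    The maximum is attained with a divisor at the left end of the window, so it
    suffices to anchor the window at each divisor d and count divisors in [d, e*d)."""
    ds = divisors(n)
    return max(sum(1 for dp in ds if d <= dp < math.e * d) for d in ds)

print(hooley_delta(12))       # 3: the divisors 2, 3, 4 fit in one window
print(hooley_delta(720720))   # a highly composite example
```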

741 citations


Journal ArticleDOI
TL;DR: The results establish a direct connection between min- and max-entropies, known to characterize information-processing tasks such as randomness extraction and state merging, and basic operational problems.
Abstract: In this paper, we show that the conditional min-entropy H_min(A|B) of a bipartite state ρ_AB is directly related to the maximum achievable overlap with a maximally entangled state if only local actions on the B-part of ρ_AB are allowed. In the special case where A is classical, this overlap corresponds to the probability of guessing A given B. In a similar vein, we connect the conditional max-entropy H_max(A|B) to the maximum fidelity of ρ_AB with a product state that is completely mixed on A. In the case where A is classical, this corresponds to the security of A when used as a secret key in the presence of an adversary holding B. Because min- and max-entropies are known to characterize information-processing tasks such as randomness extraction and state merging, our results establish a direct connection between these tasks and basic operational problems. For example, they imply that the (logarithm of the) probability of guessing A given B is a lower bound on the number of uniform secret bits that can be extracted from A relative to an adversary holding B.
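For the classical case mentioned above, the quantitative form of the connection, written here in the usual notation, reads:

$$
p_{\mathrm{guess}}(A|B) \;=\; 2^{-H_{\min}(A|B)} ,
$$

so the conditional min-entropy is exactly the negative logarithm of the optimal probability of guessing A from the quantum system B.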

692 citations


Journal ArticleDOI
TL;DR: A variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms is considered, providing the first analysis of the expected regret for such algorithms.

590 citations


Journal ArticleDOI
TL;DR: It is shown that a simple adaptation of a consensus algorithm leads to an averaging algorithm, and lower bounds on the worst-case convergence time for various classes of linear, time-invariant, distributed consensus methods are proved.
Abstract: We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple adaptation of a consensus algorithm leads to an averaging algorithm. We prove lower bounds on the worst-case convergence time for various classes of linear, time-invariant, distributed consensus methods, and provide an algorithm that essentially matches those lower bounds. We then consider the case of a time-varying topology, and provide a polynomial-time averaging algorithm.
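As a small illustration of the averaging setting (not the paper's particular construction), the sketch below runs a linear, time-invariant consensus iteration x ← Wx with a symmetric, doubly stochastic weight matrix on an assumed ring topology, so every agent's value converges to the average of the initial values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n)        # initial values held by the n agents

# Equal-neighbour weights on a ring: each agent averages itself with its two
# neighbours.  W is symmetric and doubly stochastic, so the mean is preserved.
W = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

x0_mean = x.mean()
for _ in range(200):
    x = W @ x                     # one round of linear consensus

print(x0_mean, x)                 # every entry approaches the initial average
```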

563 citations


Journal ArticleDOI
TL;DR: The numerical results demonstrated that the NS-FEM possesses the following properties: it gives an upper bound on the strain energy of the exact solution when a reasonably fine mesh is used; it is largely immune to volumetric locking; and it is insensitive to element distortion.

542 citations


Journal ArticleDOI
TL;DR: A new delay-dependent stability criterion is presented for systems with a delay varying in an interval, based on a newly defined Lyapunov functional and a tight upper bound on its derivative.

520 citations


Proceedings Article
18 Jun 2009
TL;DR: This work fills in a long open gap in the characterization of the minimax rate for the multi-armed bandit problem and proposes a new family of randomized algorithms based on an implicit normalization, as well as a new analysis.
Abstract: We fill in a long open gap in the characterization of the minimax rate for the multi-armed bandit problem. Concretely, we remove an extraneous logarithmic factor in the previously known upper bound and propose a new family of randomized algorithms based on an implicit normalization, as well as a new analysis. We also consider the stochastic case, and prove that an appropriate modification of the upper confidence bound policy UCB1 (Auer et al., 2002) achieves the distribution-free optimal rate while still having a distribution-dependent rate logarithmic in the number of plays.
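As a point of reference for the stochastic case, here is a plain implementation of the UCB1 index policy cited above (the baseline policy, not the modified one proposed in the paper); the Bernoulli arm means are made-up values for the example.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1 (Auer et al., 2002): play the arm maximizing
    empirical mean + sqrt(2 * ln t / n_pulls); pull(a) returns a reward in [0, 1]."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                       # initialization: play each arm once
        else:
            a = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                                  + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = pull(a)
        counts[a] += 1
        sums[a] += r
    return counts, sums

means = [0.3, 0.5, 0.7]                     # assumed Bernoulli arms
counts, _ = ucb1(lambda a: float(random.random() < means[a]), len(means), 10_000)
print(counts)                               # the best arm is played most of the time
```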

Journal ArticleDOI
TL;DR: A lower bound on the coupling gain is derived that is sufficient to guarantee oscillator synchronization and further sufficient conditions are derived to ensure exponential synchronization of the angular frequencies of all oscillators to the mean natural frequency of the group.
Abstract: In this technical note we study the problem of exponential synchronization for one of the most popular models of coupled phase oscillators, the Kuramoto model. We consider the special case of finite oscillators with distinct, bounded natural frequencies. Our first result derives a lower bound on the coupling gain which is necessary for the onset of synchronization. This bound improves the one derived by Jadbabaie. We then calculate a lower bound on the coupling gain that is sufficient to guarantee oscillator synchronization and derive further sufficient conditions to ensure exponential synchronization of the angular frequencies of all oscillators to the mean natural frequency of the group. We also characterize the coupling gain that is sufficient for the oscillator phase differences to approach any desired compact set in finite time.
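To make the model concrete, a minimal Euler simulation of the all-to-all Kuramoto dynamics is sketched below; the number of oscillators, the frequency range, and the coupling gain K are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def kuramoto(omega, K, theta0, dt=0.01, steps=20_000):
    """Forward-Euler integration of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    theta = np.array(theta0, dtype=float)
    N = len(theta)
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + (K / N) * coupling)
    return theta

rng = np.random.default_rng(0)
N = 10
omega = rng.uniform(-1.0, 1.0, N)            # distinct, bounded natural frequencies
theta = kuramoto(omega, K=4.0, theta0=rng.uniform(0.0, 2.0 * np.pi, N))

r = abs(np.exp(1j * theta).mean())           # order parameter; r close to 1 indicates phase locking
print(r)
```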

Journal ArticleDOI
TL;DR: This paper derives the distributional properties of the interference and provides upper and lower bounds for its distribution, and considers the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh.
Abstract: In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.

Journal ArticleDOI
TL;DR: Upper and lower bounds are derived on the capacity of the free-space optical intensity channel, which has a nonnegative input (representing the transmitted optical intensity), which is corrupted by additive white Gaussian noise.
Abstract: Upper and lower bounds are derived on the capacity of the free-space optical intensity channel. This channel has a nonnegative input (representing the transmitted optical intensity), which is corrupted by additive white Gaussian noise. To preserve the battery and for safety reasons, the input is constrained in both its average and its peak power. For a fixed ratio of the allowed average power to the allowed peak power, the difference between the upper and the lower bound tends to zero as the average power tends to infinity and their ratio tends to one as the average power tends to zero. When only an average power constraint is imposed on the input, the difference between the bounds tends to zero as the allowed average power tends to infinity, and their ratio tends to a constant as the allowed average power tends to zero.

Proceedings ArticleDOI
31 May 2009
TL;DR: Near-optimal space bounds are given in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank; results for turnstile updates are proved.
Abstract: We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entries; we prove results for turnstile updates, given in an arbitrary order. We give the first lower bounds known for the space needed by the sketches, for a given estimation error ε. We sharpen prior upper bounds, with respect to combinations of space, failure probability, and number of passes. The sketch we use for matrix A is simply S^T A, where S is a sign matrix. Our results include the following upper and lower bounds on the bits of space needed for 1-pass algorithms. Here A is an n x d matrix, B is an n x d' matrix, and c := d+d'. These results are given for fixed failure probability; for failure probability δ > 0, the upper bounds require a factor of log(1/δ) more space. We assume the inputs have integer entries specified by O(log(nc)) bits, or O(log(nd)) bits. (Matrix Product) Output a matrix C with ||A^T B - C||_F ≤ ε ||A||_F ||B||_F. We show that Θ(c ε^{-2} log(nc)) space is needed. (Linear Regression) For d' = 1, so that B is a vector b, find x so that ||Ax - b|| ≤ (1+ε) min_{x' ∈ R^d} ||Ax' - b||. We show that Θ(d^2 ε^{-1} log(nd)) space is needed. (Rank-k Approximation) Find a matrix Ã_k of rank no more than k, so that ||A - Ã_k||_F ≤ (1+ε) ||A - A_k||_F, where A_k is the best rank-k approximation to A. Our lower bound is Ω(k ε^{-1} (n+d) log(nd)) space, and we give a one-pass algorithm matching this when A is given row-wise or column-wise. For general updates, we give a one-pass algorithm needing O(k ε^{-2} (n + d/ε^2) log(nd)) space. We also give upper and lower bounds for algorithms using multiple passes, and a sketching analog of the CUR decomposition.
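To illustrate the flavor of sign sketches referred to above, the snippet below estimates a matrix product from the sketches SA and SB; the dimensions and the sketch size k are arbitrary, and this is only a sanity check of the unbiased estimator, not the paper's space-optimal construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dp, k = 1000, 30, 20, 200        # k = number of rows of the sign matrix S

A = rng.standard_normal((n, d))
B = rng.standard_normal((n, dp))

# Sign sketch: i.i.d. +-1/sqrt(k) entries, so E[S.T @ S] = I and E[(SA).T (SB)] = A.T B.
S = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)

C = (S @ A).T @ (S @ B)                # estimate of A.T @ B built from the two sketches

rel_err = np.linalg.norm(A.T @ B - C, 'fro') / (
    np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro'))
print(rel_err)                         # decreases roughly like 1/sqrt(k)
```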

Journal ArticleDOI
TL;DR: The fundamental lower bounds on the thermodynamic energy cost of measurement and information erasure are determined and constitute the second law of "information thermodynamics," in which information content and thermodynamic variables are treated on an equal footing.
Abstract: The fundamental lower bounds on the thermodynamic energy cost of measurement and information erasure are determined. The lower bound on the erasure validates Landauer's principle for a symmetric memory; for other cases, the bound indicates the breakdown of the principle. Our results constitute the second law of "information thermodynamics," in which information content and thermodynamic variables are treated on an equal footing.
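For reference, the symmetric-memory case mentioned above corresponds to the classic form of Landauer's principle: erasing one bit of information in contact with a heat bath at temperature T costs at least

$$
W_{\mathrm{erase}} \;\ge\; k_B T \ln 2 ,
$$

and the paper's bounds quantify how this statement is modified for asymmetric memories and for the measurement step.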

Journal ArticleDOI
TL;DR: Findings show that the HLRT suffers from very high complexity, whereas the QHLRT provides a reasonable solution, and an upper bound on the performance of QHLRT-based algorithms, which employ unbiased and normally distributed non-data aided estimates of the unknown parameters, is proposed.
Abstract: In this paper, likelihood-based algorithms are explored for linear digital modulation classification. Hybrid likelihood ratio test (HLRT)- and quasi HLRT (QHLRT)- based algorithms are examined, with signal amplitude, phase, and noise power as unknown parameters. The algorithm complexity is first investigated, and findings show that the HLRT suffers from very high complexity, whereas the QHLRT provides a reasonable solution. An upper bound on the performance of QHLRT-based algorithms, which employ unbiased and normally distributed non-data aided estimates of the unknown parameters, is proposed. This is referred to as the QHLRT-Upper Bound (QHLRT-UB). Classification of binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals is presented as a case study. The Cramer-Rao Lower Bounds (CRBs) of non-data aided joint estimates of signal amplitude and phase, and noise power are derived for BPSK and QPSK signals, and further employed to obtain the QHLRT-UB. An upper bound on classification performance of any likelihood-based algorithms is also introduced. Method-of-moments (MoM) estimates of the unknown parameters are investigated and used to develop the QHLRT-based algorithm. Classification performance of this algorithm is compared with the upper bounds, as well as with the quasi Log-Likelihood Ratio (qLLR) and fourth-order cumulant based algorithms.

Proceedings ArticleDOI
31 May 2009
TL;DR: This work proves a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective, and identifies classes of games that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria).
Abstract: The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a "canonical sufficient condition" for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or "price of total anarchy"). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games --- most notably, congestion games with cost functions restricted to an arbitrary fixed set --- that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first structural characterization of atomic congestion games that are universal worst-case examples for the POA.
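In the notation commonly used for this framework (a sketch of the definition, with C_i the player costs and C the sum objective), the smoothness condition and the resulting bound can be written as:

$$
\sum_i C_i(s_i^*, s_{-i}) \;\le\; \lambda\, C(s^*) + \mu\, C(s)
\quad \text{for all outcomes } s, s^*
\qquad\Longrightarrow\qquad
\mathrm{POA} \;\le\; \frac{\lambda}{1-\mu},
$$

with the same ratio extending to mixed Nash equilibria, correlated equilibria, and regret-minimizing play, as described above.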

Journal ArticleDOI
TL;DR: It is shown that by using a simple linearization technique incorporating a bounding inequality, a unified framework can be developed such that both the full-order and reduced-order filters can be obtained by solving a set of linear matrix inequalities (LMIs), which are numerically efficient with commercially available software.
Abstract: This paper investigates the problem of delay-dependent robust H∞ filtering design for a class of uncertain discrete-time state-delayed Takagi-Sugeno (T-S) fuzzy systems. The state delay is assumed to be time-varying and of an interval-like type, which means that both the lower and upper bounds of the time-varying delay are available. The parameter uncertainties are assumed to have a structured linear fractional form. Based on a novel fuzzy-basis-dependent Lyapunov-Krasovskii functional combined with Finsler's lemma and an improved free-weighting matrix technique for delay-dependent criteria, a new sufficient condition for robust H∞ performance analysis is first derived, and then, the filter synthesis is developed. It is shown that by using a simple linearization technique incorporating a bounding inequality, a unified framework can be developed such that both the full-order and reduced-order filters can be obtained by solving a set of linear matrix inequalities (LMIs), which are numerically efficient with commercially available software. Finally, simulation examples are provided to illustrate the advantages and less conservatism of the proposed approach.

Proceedings ArticleDOI
25 Oct 2009
TL;DR: In this article, the authors study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet, and give a 0.67-approximation online algorithm, breaking the 1 - 1/e ≈ 0.632 barrier.
Abstract: We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online, but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of $1-{1\over e} \simeq 0.632$, a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the $1 - {1\over e}$ bound.Our main result is a $0.67$-approximation online algorithm for stochastic bipartite matching, breaking this $1 - {1\over e}$ barrier. Furthermore, we show that no online algorithm can produce a $1-\epsilon$ approximation for an arbitrarily small $\epsilon$ for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow to two edge-disjoint (near-) matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.

Journal ArticleDOI
TL;DR: In this article, the authors present lower bounds on the mass of DM particles, coming from various dSphs, using two methods: one based only on the current phase-space distribution of the DM particles, and one using both the initial and final distributions.
Abstract: We discuss the bounds on the mass of Dark Matter (DM) particles, coming from the analysis of DM phase-space distribution in dwarf spheroidal galaxies (dSphs). After reviewing the existing approaches, we choose two methods to derive such a bound. The first one depends on the information about the current phase space distribution of DM particles only, while the second one uses both the initial and final distributions. We discuss the recent data on dSphs as well as astronomical uncertainties in relevant parameters. As an application, we present lower bounds on the mass of DM particles, coming from various dSphs, using both methods. The model-independent bound holds for any type of fermionic DM. Stronger, model-dependent bounds are quoted for several DM models (thermal relics, non-resonantly and resonantly produced sterile neutrinos, etc.). The latter bounds rely on the assumption that baryonic feedback cannot significantly increase the maximum of a distribution function of DM particles. For the scenario in which all the DM is made of sterile neutrinos produced via non-resonant mixing with the active neutrinos (NRP) this gives mNRP > 1.7 keV. Combining these results in their most conservative form with the X-ray bounds of DM decay lines, we conclude that the NRP scenario remains allowed in a very narrow parameter window only. This conclusion is independent of the results of the Lyman-α analysis. The DM model in which sterile neutrinos are resonantly produced in the presence of lepton asymmetry remains viable. Within the minimal neutrino extension of the Standard Model (the νMSM), both mass and the mixing angle of the DM sterile neutrino are bounded from above and below, which suggests the possibility for its experimental search.

Journal ArticleDOI
TL;DR: In this article, it is shown that the density of eigenvalues concentrates around the Wigner semicircle law on energy scales $\eta \gg N^{-1} (\log N)^8$, which, up to the logarithmic factor, is the smallest scale on which the semicircle law may be valid.
Abstract: We consider N × N Hermitian random matrices with independent, identically distributed entries. The matrix is normalized so that the average spacing between consecutive eigenvalues is of order 1/N. Under suitable assumptions on the distribution of the single matrix element, we prove that, away from the spectral edges, the density of eigenvalues concentrates around the Wigner semicircle law on energy scales $\eta \gg N^{-1} (\log N)^8$. Up to the logarithmic factor, this is the smallest energy scale for which the semicircle law may be valid. We also prove that for all eigenvalues away from the spectral edges, the l∞-norm of the corresponding eigenvectors is of order O(N^{-1/2}), modulo logarithmic corrections. The upper bound O(N^{-1/2}) implies that every eigenvector is completely delocalized, i.e., the maximum size of the components of the eigenvector is of the same order as their average size.
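A quick numerical check of the two statements (semicircle density and eigenvector delocalization) can be done with a sampled Wigner matrix; the ensemble and matrix size below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Real symmetric Wigner matrix, normalized so the spectrum concentrates on [-2, 2].
H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2.0 * N)

eigvals, eigvecs = np.linalg.eigh(H)

# Empirical eigenvalue density vs. the semicircle law rho(x) = sqrt(4 - x^2) / (2*pi).
hist, edges = np.histogram(eigvals, bins=50, range=(-2, 2), density=True)
mid = (edges[:-1] + edges[1:]) / 2.0
semicircle = np.sqrt(np.clip(4.0 - mid**2, 0.0, None)) / (2.0 * np.pi)
print(np.abs(hist - semicircle).max())          # small for large N

# Delocalization: the largest eigenvector component is O(N^{-1/2}) up to log factors.
print(np.abs(eigvecs).max() * np.sqrt(N))
```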

Journal ArticleDOI
TL;DR: A lower bound of Ω(n log n) is derived for the complexity of computing the hypervolume indicator in any number of dimensions d > 1 by reducing the so-called UniformGap problem to it.
Abstract: The goal of multiobjective optimization is to find a set of best compromise solutions for typically conflicting objectives. Due to the complex nature of most real-life problems, only an approximation to such an optimal set can be obtained within reasonable (computing) time. To compare such approximations, and thereby the performance of multiobjective optimizers providing them, unary quality measures are usually applied. Among these, the hypervolume indicator (or S-metric) is of particular relevance due to its favorable properties. Moreover, this indicator has been successfully integrated into stochastic optimizers, such as evolutionary algorithms, where it serves as a guidance criterion for finding good approximations to the Pareto front. Recent results show that computing the hypervolume indicator can be seen as solving a specialized version of Klee's measure problem. In general, Klee's measure problem can be solved with O(n log n + n^{d/2} log n) comparisons for an input instance of size n in d dimensions; as of this writing, it is unknown whether a lower bound higher than Ω(n log n) can be proven. In this paper, we derive a lower bound of Ω(n log n) for the complexity of computing the hypervolume indicator in any number of dimensions d > 1 by reducing the so-called UniformGap problem to it. For the 3-D case, we also present a matching upper bound of O(n log n) comparisons that is obtained by extending an algorithm for finding the maxima of a point set.
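For the 2-D case (simpler than the 3-D algorithm discussed in the paper), the hypervolume indicator can be computed with one sort and a sweep; the sketch below assumes minimization in both objectives and a user-supplied reference point.

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` with respect to the reference point `ref`,
    assuming both objectives are to be minimized.  Runs in O(n log n)."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted((x, y) for (x, y) in points if x < ref[0] and y < ref[1])
    hv, best_y = 0.0, ref[1]
    # Sweep by increasing first objective; each new non-dominated point adds a
    # rectangle whose height is its improvement in the second objective.
    for x, y in pts:
        if y < best_y:
            hv += (ref[0] - x) * (best_y - y)
            best_y = y
    return hv

print(hypervolume_2d([(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)], ref=(5.0, 5.0)))   # 12.0
```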

Journal ArticleDOI
TL;DR: The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis, stepping beyond the limitations of current algorithm design.
Abstract: For more than 40 years, Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. Motivated by this, we use an approach, that we call "Measure & Conquer", as an attempt to step beyond such limitations. The approach is based on the careful design of a nonstandard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. In order to show the potentialities of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm, and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm, and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples show that a good choice of the measure, made in the very first stages of exact algorithms design, can have a tremendous impact on the running time bounds achievable.
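As background for the kind of algorithms being analyzed, here is a toy branch-and-reduce procedure for maximum independent set; it only illustrates the branching scheme, with none of the refined reduction rules or the nonstandard measure used in the paper.

```python
def mis_size(adj):
    """Size of a maximum independent set.  adj maps each vertex to the set of
    its neighbours in an undirected graph."""
    if not adj:
        return 0
    # Reduction: a vertex of degree <= 1 can always be put into the solution.
    for v, nbrs in adj.items():
        if len(nbrs) <= 1:
            return 1 + mis_size(_remove(adj, {v} | nbrs))
    # Branch on a maximum-degree vertex: either discard it, or take it and
    # discard its whole closed neighbourhood.
    v = max(adj, key=lambda u: len(adj[u]))
    return max(mis_size(_remove(adj, {v})),
               1 + mis_size(_remove(adj, {v} | adj[v])))

def _remove(adj, vs):
    return {u: nbrs - vs for u, nbrs in adj.items() if u not in vs}

# A 5-cycle has a maximum independent set of size 2.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(mis_size(cycle5))   # 2
```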

Book ChapterDOI
06 Jul 2009
TL;DR: This paper focuses on developing fast polynomial-time algorithms for several variations of dense subgraph problems for both directed and undirected graphs; when a size lower bound is specified, the problem is shown to be NP-complete and fast algorithms are given that find subgraphs within a factor 2 of the optimum density.
Abstract: Given an undirected graph G = (V, E), the density of a subgraph on vertex set S is defined as $d(S)=\frac{|E(S)|}{|S|}$, where E(S) is the set of edges in the subgraph induced by nodes in S. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph one notion of density given by Kannan and Vinay [12] is as follows: given subsets S and T of vertices, the density of the subgraph is $d(S,T)=\frac{|E(S,T)|}{\sqrt{|S||T|}}$, where E(S,T) is the set of edges going from S to T. Without any size constraints, a subgraph of maximum density can be found in polynomial time. When we require the subgraph to have a specified size, the problem of finding a maximum density subgraph becomes NP-hard. In this paper we focus on developing fast polynomial time algorithms for several variations of dense subgraph problems for both directed and undirected graphs. When there is no size bound, we extend the flow based technique for obtaining a densest subgraph in directed graphs and also give a linear time 2-approximation algorithm for it. When a size lower bound is specified for both directed and undirected cases, we show that the problem is NP-complete and give fast algorithms to find subgraphs within a factor 2 of the optimum density. We also show that solving the densest subgraph problem with an upper bound on size is as hard as solving the problem with an exact size constraint, within a constant factor.
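The flow-based and approximation algorithms of the paper are not reproduced here; as a familiar point of comparison for the undirected, unconstrained problem, below is the classic greedy peeling procedure (Charikar's 2-approximation for max |E(S)|/|S|), written as a sketch rather than as the authors' method.

```python
def densest_subgraph_peel(edges, n):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and remember the
    densest intermediate subgraph.  2-approximation for max |E(S)| / |S|."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    m = len(edges)
    best_density, best_set = (m / n if n else 0.0), set(alive)
    while alive:
        v = min(alive, key=lambda u: len(adj[u]))   # minimum-degree vertex
        alive.remove(v)
        m -= len(adj[v])
        for w in adj[v]:
            adj[w].discard(v)
        adj[v].clear()
        if alive and m / len(alive) > best_density:
            best_density, best_set = m / len(alive), set(alive)
    return best_set, best_density

# K4 plus a pendant vertex: the densest subgraph is the K4, with density 6/4 = 1.5.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(edges, 5))
```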

Journal ArticleDOI
TL;DR: This article presents a worst-case O(n^3)-time algorithm for the problem when the two trees have size n, and proves the optimality of the algorithm among the family of decomposition strategy algorithms (which also includes the previous fastest algorithms) by tightening the known lower bound.
Abstract: The edit distance between two ordered rooted trees with vertex labels is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. In this article, we present a worst-case O(n^3)-time algorithm for the problem when the two trees have size n, improving the previous best O(n^3 log n)-time algorithm. Our result requires a novel adaptive strategy for deciding how a dynamic program divides into subproblems, together with a deeper understanding of the previous algorithms for the problem. We prove the optimality of our algorithm among the family of decomposition strategy algorithms (which also includes the previous fastest algorithms) by tightening the known lower bound of Ω(n^2 log^2 n) to Ω(n^3), matching our algorithm's running time. Furthermore, we obtain matching upper and lower bounds of Θ(nm^2 (1 + log(n/m))) for decomposition strategy algorithms when the two trees have sizes m and n with m ≤ n.

Journal ArticleDOI
01 Sep 2009
TL;DR: This work takes a major step towards closing the gap between the upper bound and the conjectured lower bound by presenting an algorithm running in time $n\log n\,2^{O(\log^*n)}$.
Abstract: For more than 35 years, the fastest known method for integer multiplication has been the Schonhage-Strassen algorithm running in time $O(n\log n\log\log n)$. Under certain restrictive conditions, there is a corresponding $\Omega(n\log n)$ lower bound. All this time, the prevailing conjecture has been that the complexity of an optimal integer multiplication algorithm is $\Theta(n\log n)$. We take a major step towards closing the gap between the upper bound and the conjectured lower bound by presenting an algorithm running in time $n\log n\,2^{O(\log^*n)}$. The running time bound holds for multitape Turing machines. The same bound is valid for the size of Boolean circuits.

Journal ArticleDOI
TL;DR: This work generalizes a lower bound on the amount of communication needed to perform dense, n-by-n matrix multiplication using the conventional O(n^3) algorithm to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization, the Gram–Schmidt algorithm, and algorithms for eigenvalues and singular values.
Abstract: In 1981 Hong and Kung [HK81] proved a lower bound on the amount of communication (amount of data moved between a small, fast memory and a large, slow memory) needed to perform dense, n-by-n matrix multiplication using the conventional O(n^3) algorithm, where the input matrices were too large to fit in the small, fast memory. In 2004 Irony, Toledo and Tiskin [ITT04] gave a new proof of this result and extended it to the parallel case (where communication means the amount of data moved between processors). In both cases the lower bound may be expressed as Ω(#arithmetic operations / √M), where M is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization, algorithms for eigenvalues and singular values, i.e., essentially all direct methods of linear algebra. The proof works for dense or sparse matrices, and for sequential or parallel algorithms. In addition to lower bounds on the amount of data moved (bandwidth) we get lower bounds on the number of messages required to move it (latency). We illustrate how to extend our lower bound technique to compositions of linear algebra operations (like computing powers of a matrix), to decide whether it is enough to call a sequence of simpler optimal algorithms (like matrix multiplication) to minimize communication, or if we can do better. We give examples of both. We also show how to extend our lower bounds to certain graph theoretic problems. We point out recently designed algorithms for dense LU, Cholesky, QR, eigenvalue and the SVD problems that attain these lower bounds; implementations of LU and QR show large speedups over conventional linear algebra algorithms in standard libraries like LAPACK and ScaLAPACK. Many open problems remain.
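As a worked instance of the general form quoted above, for conventional dense matrix multiplication the bound specializes to the familiar expressions below (a standard consequence, written out here for illustration; W denotes words moved and M the fast-memory size):

$$
W \;=\; \Omega\!\left(\frac{\#\text{arithmetic operations}}{\sqrt{M}}\right)
\;=\; \Omega\!\left(\frac{n^3}{\sqrt{M}}\right),
\qquad
\#\text{messages} \;=\; \Omega\!\left(\frac{n^3}{M^{3/2}}\right),
$$

the latency bound following because each message carries at most M words.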

Journal ArticleDOI
TL;DR: An anytime algorithm is developed to solve the coalition structure generation problem, using a novel representation of the search space that partitions the space of possible solutions into sub-spaces such that it is possible to compute upper and lower bounds on the values of the best coalition structures in them.
Abstract: Coalition formation is a fundamental type of interaction that involves the creation of coherent groupings of distinct, autonomous, agents in order to efficiently achieve their individual or collective goals. Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of determining which of the many possible coalitions to form in order to achieve some goal. This usually requires calculating a value for every possible coalition, known as the coalition value, which indicates how beneficial that coalition would be if it was formed. Once these values are calculated, the agents usually need to find a combination of coalitions, in which every agent belongs to exactly one coalition, and by which the overall outcome of the system is maximized. However, this coalition structure generation problem is extremely challenging due to the number of possible solutions that need to be examined, which grows exponentially with the number of agents involved. To date, therefore, many algorithms have been proposed to solve this problem using different techniques -- ranging from dynamic programming, to integer programming, to stochastic search -- all of which suffer from major limitations relating to execution time, solution quality, and memory requirements. With this in mind, we develop an anytime algorithm to solve the coalition structure generation problem. Specifically, the algorithm uses a novel representation of the search space, which partitions the space of possible solutions into sub-spaces such that it is possible to compute upper and lower bounds on the values of the best coalition structures in them. These bounds are then used to identify the sub-spaces that have no potential of containing the optimal solution so that they can be pruned. The algorithm, then, searches through the remaining sub-spaces very efficiently using a branch-and-bound technique to avoid examining all the solutions within the searched subspace(s). In this setting, we prove that our algorithm enumerates all coalition structures efficiently by avoiding redundant and invalid solutions automatically. Moreover, in order to effectively test our algorithm we develop a new type of input distribution which allows us to generate more reliable benchmarks compared to the input distributions previously used in the field. Given this new distribution, we show that for 27 agents our algorithm is able to find solutions that are optimal in 0.175% of the time required by the fastest available algorithm in the literature. The algorithm is anytime, and if interrupted before it would have normally terminated, it can still provide a solution that is guaranteed to be within a bound from the optimal one. Moreover, the guarantees we provide on the quality of the solution are significantly better than those provided by the previous state of the art algorithms designed for this purpose. For example, for the worst case distribution given 25 agents, our algorithm is able to find a 90% efficient solution in around 10% of time it takes to find the optimal solution.

Journal ArticleDOI
TL;DR: The fundamental limit on the operation rate of any information processing system is established: for any values of ΔE and E, there exists a one-parameter family of initial states that can approach the bound arbitrarily closely as the parameter approaches its limit.
Abstract: How fast a quantum state can evolve has attracted considerable attention in connection with quantum measurement and information processing. A lower bound on the orthogonalization time, based on the energy spread ΔE, was found by Mandelstam and Tamm. Another bound, based on the average energy E, was established by Margolus and Levitin. The bounds coincide and can be attained by certain initial states if ΔE = E. Yet, the problem remained open when ΔE ≠ E. We consider the unified bound that involves both ΔE and E. We prove that there exist no initial states that saturate the bound if ΔE ≠ E. However, the bound remains tight: for any values of ΔE and E, there exists a one-parameter family of initial states that can approach the bound arbitrarily closely when the parameter approaches its limit. These results establish the fundamental limit of the operation rate of any information processing system.
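In the standard notation, the two bounds referred to above and their unified form read (with E measured from the ground-state energy):

$$
\tau_\perp \;\ge\; \frac{\pi\hbar}{2\,\Delta E}
\quad\text{(Mandelstam-Tamm)},
\qquad
\tau_\perp \;\ge\; \frac{\pi\hbar}{2\,E}
\quad\text{(Margolus-Levitin)},
\qquad
\tau_\perp \;\ge\; \max\!\left\{\frac{\pi\hbar}{2\,\Delta E},\,\frac{\pi\hbar}{2\,E}\right\},
$$

where τ⊥ is the orthogonalization time discussed in the abstract.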