
Showing papers on "Upper and lower bounds" published in 2007


Journal ArticleDOI
TL;DR: In this paper, wave spectrum parameters related to the transport, distribution, and variability of wave energy in the sea are explained; however, the effect of wave interference on the performance of wave-energy converters is not considered.

920 citations


Journal ArticleDOI
TL;DR: An O(N^{k/(k+1)}) query quantum algorithm is given for the generalization of element distinctness in which one has to find k equal items among N items.
Abstract: We use quantum walks to construct a new quantum algorithm for element distinctness and its generalization. For element distinctness (the problem of finding two equal items among $N$ given items), we get an $O(N^{2/3})$ query quantum algorithm. This improves the previous $O(N^{3/4})$ quantum algorithm of Buhrman et al. [SIAM J. Comput., 34 (2005), pp. 1324-1330] and matches the lower bound of Aaronson and Shi [J. ACM, 51 (2004), pp. 595-605]. We also give an $O(N^{k/(k+1)})$ query quantum algorithm for the generalization of element distinctness in which we have to find $k$ equal items among $N$ items.
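As a point of reference for the query complexity, here is a minimal classical sketch (not from the paper) that solves the k-equal-items problem by hashing and inspects all N items; the quantum-walk algorithm above needs only O(N^{k/(k+1)}) queries.

```python
from collections import defaultdict

def find_k_equal_items(items, k=2):
    """Classical baseline for (k-)element distinctness: return the indices of
    k equal items among the N given items, or None if no such k-collision exists.
    Inspects every item once, in contrast with the sublinear quantum query
    complexity discussed above."""
    positions = defaultdict(list)
    for index, value in enumerate(items):
        positions[value].append(index)
        if len(positions[value]) == k:
            return positions[value]
    return None

print(find_k_equal_items([5, 3, 9, 3], k=2))   # [1, 3]
print(find_k_equal_items([1, 2, 3, 4], k=2))   # None
```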

593 citations


Journal ArticleDOI
TL;DR: This paper presents a methodology for safety verification of continuous and hybrid systems in the worst-case and stochastic settings, and computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate.
Abstract: This paper presents a methodology for safety verification of continuous and hybrid systems in the worst-case and stochastic settings. In the worst-case setting, a function of state termed barrier certificate is used to certify that all trajectories of the system starting from a given initial set do not enter an unsafe region. No explicit computation of reachable sets is required in the construction of barrier certificates, which makes it possible to handle nonlinearity, uncertainty, and constraints directly within this framework. In the stochastic setting, our method computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate. For polynomial systems, barrier certificates can be constructed using convex optimization, and hence the method is computationally tractable. Some examples are provided to illustrate the use of the method.
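To make the worst-case conditions concrete, the following sketch numerically spot-checks a candidate barrier certificate on a toy system; the dynamics, sets, and certificate are invented for illustration, and random sampling is not a substitute for the convex (sum-of-squares) construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                       # toy dynamics dx/dt = -x (invented for this sketch)
    return -x

def B(x):                       # candidate barrier certificate B(x) = ||x||^2 - 1
    return np.dot(x, x) - 1.0

def grad_B(x):
    return 2.0 * x

# Spot-check the worst-case barrier conditions on random samples:
#   B <= 0 on the initial set, B > 0 on the unsafe set, and dB/dt = grad(B).f <= 0.
init = rng.uniform(-0.5, 0.5, size=(1000, 2))
init = init[np.linalg.norm(init, axis=1) <= 0.5]                       # initial set {||x|| <= 0.5}
unsafe = rng.standard_normal((1000, 2))
unsafe = 2.0 * unsafe / np.linalg.norm(unsafe, axis=1, keepdims=True)  # unsafe set {||x|| = 2}
domain = 3.0 * rng.uniform(-1, 1, size=(1000, 2))

print(all(B(x) <= 0 for x in init))                         # True
print(all(B(x) > 0 for x in unsafe))                        # True
print(all(np.dot(grad_B(x), f(x)) <= 0 for x in domain))    # True
```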

572 citations


Journal ArticleDOI
TL;DR: This note uses not only the time-varying-delayed state x(t − h(t)) but also the delay-upper-bounded state x(t − hU), where hU denotes the upper bound of the delay, to exploit all possible information on the relationship among the current state x(t), the exactly delayed state x(t − h(t)), the marginally delayed state x(t − hU), and the derivative of the state ẋ(t).

442 citations


Journal ArticleDOI
TL;DR: This work reviews a not widely known approach to the max-sum labeling problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and shows how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product.
Abstract: The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
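As a concrete illustration of the objective being bounded, here is a hedged brute-force evaluation of the max-sum criterion on a tiny invented instance (three variables, two labels, a chain of pairwise functions); real instances are NP-hard and are the subject of the LP relaxation reviewed in the paper.

```python
import itertools
import numpy as np

# Tiny invented max-sum instance: 3 variables, 2 labels, chain edges (0,1) and (1,2).
unary = np.array([[0.0, 1.0],
                  [0.5, 0.0],
                  [0.2, 0.3]])                  # unary[i][label]
pairwise = {(0, 1): np.array([[1.0, 0.0],
                              [0.0, 1.0]]),     # pairwise[(i, j)][label_i][label_j]
            (1, 2): np.array([[0.0, 0.7],
                              [0.7, 0.0]])}

def max_sum_value(labeling):
    value = sum(unary[i][k] for i, k in enumerate(labeling))
    value += sum(g[labeling[i]][labeling[j]] for (i, j), g in pairwise.items())
    return value

# Exhaustive search over all labelings (feasible only for toy instances).
best = max(itertools.product(range(2), repeat=3), key=max_sum_value)
print(best, max_sum_value(best))                # maximizing labeling and its value
```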

410 citations


Journal ArticleDOI
TL;DR: A precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones is mainly devoted.
Abstract: This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones. As compared to our previous paper on this topic (Birgé and Massart in J. Eur. Math. Soc. 3, 203-268 (2001)), more elaborate forms of the penalties are given which are shown to be, in some sense, optimal. We indeed provide more precise upper bounds for the risk of the penalized estimators and lower bounds for the penalty terms, showing that the use of smaller penalties may lead to disastrous results. These lower bounds may also be used to design a practical strategy that allows one to estimate the penalty from the data when the amount of noise is unknown. We provide an illustration of the method for the problem of estimating a piecewise constant signal in Gaussian noise when neither the number nor the location of the change points are known.
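A minimal sketch of the kind of procedure analysed, for the piecewise-constant illustration: fit the best D-piece least-squares approximation by dynamic programming and select D by a penalized criterion. The penalty shape sigma^2 * D * (c1 + c2 * log(n/D)) and the constants used here are assumptions for the sketch, not the paper's calibrated values.

```python
import numpy as np

def best_segmentation_costs(y, D_max):
    """cost[D] = minimal RSS of fitting D constant pieces to y (DP, O(n^2 * D_max))."""
    n = len(y)
    csum, csum2 = np.cumsum(np.r_[0.0, y]), np.cumsum(np.r_[0.0, y**2])
    def rss(i, j):                            # RSS of one constant piece on y[i:j]
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m
    cost = np.full((D_max + 1, n + 1), np.inf)
    cost[0][0] = 0.0
    for D in range(1, D_max + 1):
        for j in range(D, n + 1):
            cost[D][j] = min(cost[D - 1][i] + rss(i, j) for i in range(D - 1, j))
    return cost[:, n]

rng = np.random.default_rng(1)
signal = np.r_[np.zeros(40), 2.0 * np.ones(30), 0.5 * np.ones(30)]   # 3 true pieces
y = signal + 0.3 * rng.standard_normal(signal.size)

n, sigma2, D_max = y.size, 0.3**2, 8
rss_D = best_segmentation_costs(y, D_max)[1:]                 # RSS for D = 1..D_max pieces
pen = lambda D: sigma2 * D * (2.0 + 2.0 * np.log(n / D))      # assumed penalty shape/constants
D_hat = 1 + int(np.argmin([rss_D[D - 1] + pen(D) for D in range(1, D_max + 1)]))
print("selected number of pieces:", D_hat)                    # expected to select 3 here
```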

393 citations


Journal ArticleDOI
TL;DR: A simple, novel, and general method is presented for approximating the sum of independent or arbitrarily correlated lognormal random variables (RVs) by a single lognormal RV, without the extremely precise numerical computations at a large number of points that were required by previously proposed methods.
Abstract: A simple, novel, and general method is presented in this paper for approximating the sum of independent or arbitrarily correlated lognormal random variables (RV) by a single lognormal RV. The method is also shown to be applicable for approximating the sum of lognormal-Rice and Suzuki RVs by a single lognormal RV. A sum consisting of a mixture of the above distributions can also be easily handled. The method uses the moment generating function (MGF) as a tool in the approximation and does so without the extremely precise numerical computations at a large number of points that were required by the previously proposed methods in the literature. Unlike popular approximation methods such as the Fenton-Wilkinson method and the Schwartz-Yeh method, which have their own respective short-comings, the proposed method provides the parametric flexibility to accurately approximate different portions of the lognormal sum distribution. The accuracy of the method is measured both visually, as has been done in the literature, as well as quantitatively, using curve-fitting metrics. An upper bound on the sensitivity of the method is also provided.
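For contrast, the sketch below implements the classical Fenton-Wilkinson baseline mentioned in the abstract (matching the first two moments of the sum of independent lognormals); the parameter values are invented, and the paper's MGF-based method, which offers more flexibility in fitting different portions of the distribution, is not reproduced here.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent lognormal RVs (log-domain parameters
    mus, sigmas) by a single lognormal RV that matches the sum's mean and variance."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2.0)                                 # E[X_i]
    variances = (np.exp(sigmas**2) - 1.0) * np.exp(2 * mus + sigmas**2)   # Var[X_i]
    M, V = means.sum(), variances.sum()                                   # moments of the sum
    sigma2_z = np.log(1.0 + V / M**2)
    mu_z = np.log(M) - sigma2_z / 2.0
    return mu_z, np.sqrt(sigma2_z)

# Invented example: three independent lognormal terms.
mus, sigmas = [0.0, 0.2, -0.1], [0.5, 0.6, 0.4]
mu_z, sigma_z = fenton_wilkinson(mus, sigmas)

# Quick Monte Carlo sanity check of the matched mean.
rng = np.random.default_rng(0)
samples = sum(rng.lognormal(m, s, 200_000) for m, s in zip(mus, sigmas))
print(samples.mean(), np.exp(mu_z + sigma_z**2 / 2.0))    # the two should be close
```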

356 citations


Journal ArticleDOI
TL;DR: The paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches and proposes that a key consideration should be the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture.
Abstract: A common problem in multi-target tracking is to approximate a Gaussian mixture by one containing fewer components; similar problems can arise in integrated navigation. A common approach is successively to merge pairs of components, replacing the pair with a single Gaussian component whose moments up to second order match those of the merged pair. Salmond [1] and Williams [2, 3] have each proposed algorithms along these lines, but using different criteria for selecting the pair to be merged at each stage. The paper shows how under certain circumstances each of these pair-selection criteria can give rise to anomalous behaviour, and proposes that a key consideration should be the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture. Although computing this directly would normally be impractical, the paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches. The behaviour of the three algorithms is compared using a high-dimensional example drawn from terrain-referenced navigation.
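The moment-preserving pair merge described above has a simple closed form; a minimal sketch with invented components is given below. The paper's KL-based pair-selection bound is not reproduced here.

```python
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Replace two weighted Gaussian components by a single component whose
    weight, mean, and covariance match the pair's moments up to second order."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

# Invented 2-D example.
w, m, P = merge_pair(0.3, np.array([0.0, 0.0]), np.eye(2),
                     0.7, np.array([2.0, 1.0]), 0.5 * np.eye(2))
print(w, m, P)
```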

345 citations


Journal ArticleDOI
TL;DR: A relaxation method is described which yields an easily computable upper bound on the optimal solution of portfolio selection, and a heuristic method for finding a suboptimal portfolio which is based on solving a small number of convex optimization problems.
Abstract: We consider the problem of portfolio selection, with transaction costs and constraints on exposure to risk. Linear transaction costs, bounds on the variance of the return, and bounds on different shortfall probabilities are efficiently handled by convex optimization methods. For such problems, the globally optimal portfolio can be computed very rapidly. Portfolio optimization problems with transaction costs that include a fixed fee, or discount breakpoints, cannot be directly solved by convex optimization. We describe a relaxation method which yields an easily computable upper bound via convex optimization. We also describe a heuristic method for finding a suboptimal portfolio, which is based on solving a small number of convex optimization problems (and hence can be done efficiently). Thus, we produce a suboptimal solution, and also an upper bound on the optimal solution. Numerical experiments suggest that for practical problems the gap between the two is small, even for large problems involving hundreds of assets. The same approach can be used for related problems, such as that of tracking an index with a portfolio consisting of a small number of assets.
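A hedged sketch of the convex core of such a problem (linear transaction costs, a variance bound, long-only weights), written with the cvxpy modeling library; the data and constraint set are invented, and the paper's treatment of fixed fees via relaxation, as well as its heuristic, are not shown.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 10
mu = 0.05 + 0.05 * rng.random(n)              # invented expected returns
A = rng.standard_normal((n, n))
Sigma = 0.01 * (A @ A.T / n + np.eye(n))      # invented covariance (PSD by construction)
w0 = np.ones(n) / n                           # current holdings
kappa, sigma_max = 0.002, 0.15                # linear cost rate and risk bound (invented)

w = cp.Variable(n)
net_return = mu @ w - kappa * cp.norm1(w - w0)          # return net of linear transaction costs
problem = cp.Problem(cp.Maximize(net_return),
                     [cp.sum(w) == 1,
                      w >= 0,
                      cp.quad_form(w, Sigma) <= sigma_max**2])
problem.solve()
print(problem.status, float(net_return.value))
```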

344 citations


Journal ArticleDOI
TL;DR: Some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed, and numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.
Abstract: This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.

318 citations


Journal ArticleDOI
TL;DR: Joint multicell processing is shown to eliminate out-of-cell interference, which is traditionally considered to be a limiting factor in high-rate reliable communications.
Abstract: The sum-rate capacity of a cellular system model is analyzed, considering the uplink and downlink channels, while addressing both nonfading and flat-fading channels. The focus is on a simple Wyner-like multicell model, where the system cells are arranged on a circle, and the cell sites are located at the boundaries of the cells. For the uplink channel, analytical expressions of the sum-rate capacities are derived for intra-cell time-division multiple-access (TDMA) scheduling, and a "wideband" (WB) scheme (where all users are active simultaneously, utilizing the entire bandwidth for coding). Assuming individual equal per-cell power constraints, and using the Lagrangian uplink-downlink duality principle, an analytical expression for the sum-rate capacity of the downlink channel is derived for nonfading channels, and shown to coincide with the corresponding uplink result. Introducing flat-fading, lower and upper bounds on the average per-cell ergodic sum-rate capacity are derived. The bounds exhibit an O(log_e K) multiuser diversity factor for a number of users per cell K ≫ 1, in addition to the array diversity gain. Joint multicell processing is shown to eliminate out-of-cell interference, which is traditionally considered to be a limiting factor in high-rate reliable communications.

Posted Content
TL;DR: A deterministic channel model which captures several key features of multiuser wireless communication is presented, and an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes is presented.
Abstract: We present a deterministic channel model which captures several key features of multiuser wireless communication. We consider a model for a wireless network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes. This result is a natural generalization of the max-flow min-cut theorem for wireline networks. Finally, to demonstrate the connections between the deterministic model and the Gaussian model, we look at two examples: the single-relay channel and the diamond network. We show that in each of these two examples, the capacity-achieving scheme in the corresponding deterministic model naturally suggests a scheme in the Gaussian model that is within 1 bit and 2 bits, respectively, of the cut-set upper bound, for all values of the channel gains. This is the first part of a two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut theorem for a class of deterministic networks of which our model is a special case.

Journal ArticleDOI
TL;DR: Criteria for verifying robust stability are formulated as feasibility problems over a set of frequency-dependent linear matrix inequalities, which can be equivalently cast as semi-definite programs (SDPs) using the Kalman-Yakubovich-Popov lemma.

Journal ArticleDOI
TL;DR: In this paper, a three-slot sequential amplify-and-forward (SAF) scheme was proposed to exploit the potential diversity gain in the high multiplexing gain regime.
Abstract: In a slow-fading channel, how to find a cooperative diversity scheme that achieves the transmit diversity bound is still an open problem. In fact, all previously proposed amplify-and-forward (AF) and decode-and-forward (DF) schemes do not improve with the number of relays in terms of the diversity-multiplexing tradeoff (DMT) for multiplexing gains r higher than 0.5. In this work, the class of slotted amplify-and-forward (SAF) schemes is studied. First, an upper bound on the DMT for any SAF scheme with an arbitrary number of relays N and number of slots M is established. Then, a sequential SAF scheme that can exploit the potential diversity gain in the high multiplexing gain regime is proposed. More precisely, in certain conditions, the sequential SAF scheme achieves the proposed DMT upper bound, which tends to the transmit diversity bound when M goes to infinity. In particular, for the two-relay case, the three-slot sequential SAF scheme achieves the proposed upper bound and outperforms the two-relay nonorthogonal amplify-and-forward (NAF) scheme of Azarian for multiplexing gains r ≤ 2/3. Numerical results reveal a significant gain of our scheme over the previously proposed AF schemes, especially in the high spectral efficiency and large network size regime.

Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound solution procedure was proposed to obtain feasible schedules with guaranteed optimality for a single-track train timetabling problem, subject to a set of operational and safety requirements.
Abstract: A single-track train timetabling problem is studied in order to minimize the total train travel time, subject to a set of operational and safety requirements. This research proposes a generalized resource-constrained project scheduling formulation which considers segment and station headway capacities as limited resources, and presents a branch-and-bound solution procedure to obtain feasible schedules with guaranteed optimality. The search algorithm chronologically adds precedence relation constraints between conflicting trains to eliminate conflicts, and the resulting sub-problems are solved by the longest path algorithm to determine the earliest start times for each train in different segments. This study adapts three approaches to effectively reduce the solution space. First, a Lagrangian relaxation based lower bound rule is used to dualize segment and station entering headway capacity constraints. Second, an exact lower bound rule is used to estimate the least train delay for resolving the remaining crossing conflicts in a partial schedule. Third, a tight upper bound is constructed by a beam search heuristic method. Comprehensive numerical experiments are conducted to illustrate the computational performance of the proposed lower bound rules and heuristic upper bound construction methods.

Journal ArticleDOI
TL;DR: The method, based on an Offline–Online strategy relevant in the reduced basis many-query and real-time context, reduces the Online calculation to a small Linear Program: the objective is a parametric expansion of the underlying Rayleigh quotient; the constraints reflect stability information at optimally selected parameter points.

Journal ArticleDOI
TL;DR: The results of this paper provide a unifying framework under which all these algorithms can be viewed, and the link with VARX modeling has important implications as far as computational complexity is concerned, leading to very computationally attractive implementations.

Journal ArticleDOI
TL;DR: In this article, the authors present a finite element implementation of the kinematic upper bound theorem that is novel in two main respects: first, it is shown that conventional linear strain elements (6-node triangle, 10-node tetrahedron) are suitable for obtaining strict upper bounds even in the case of cohesive-frictional materials, provided that the element sides are straight (or the faces planar) such that the strain field varies as a simplex.
Abstract: In geomechanics, limit analysis provides a useful method for assessing the capacity of structures such as footings and retaining walls, and the stability of slopes and excavations. This paper presents a finite element implementation of the kinematic (or upper bound) theorem that is novel in two main respects. First, it is shown that conventional linear strain elements (6-node triangle, 10-node tetrahedron) are suitable for obtaining strict upper bounds even in the case of cohesive-frictional materials, provided that the element sides are straight (or the faces planar) such that the strain field varies as a simplex. This is important because until now, the only way to obtain rigorous upper bounds has been to use constant strain elements combined with a discontinuous displacement field. It is well known (and confirmed here) that the accuracy of the latter approach is highly dependent on the alignment of the discontinuities, such that it can perform poorly if an unstructured mesh is employed. Second, the optimization of the displacement field is formulated as a standard second-order cone programming (SOCP) problem. Using a state-of-the-art SOCP code developed by researchers in mathematical programming, very large example problems are solved with outstanding speed. The examples concern plane strain and the Mohr-Coulomb criterion, but the same approach can be used in 3D with the Drucker-Prager criterion, and can readily be extended to other yield criteria having a similar conic quadratic form.

Proceedings ArticleDOI
11 Jun 2007
TL;DR: A stronger version of the adversary method, called ADV±, is proposed in this article; it goes beyond the principle that a successful algorithm must be able to distinguish inputs mapping to different values, and makes explicit use of the stronger condition that the algorithm actually computes the function.
Abstract: The quantum adversary method is one of the most successful techniques for proving lower bounds on quantum query complexity. It gives optimal lower bounds for many problems, has application to classical complexity in formula size lower bounds, and is versatile with equivalent formulations in terms of weight schemes, eigenvalues, and Kolmogorov complexity. All these formulations rely on the principle that if an algorithm successfully computes a function then, in particular, it is able to distinguish between inputs which map to different values. We present a stronger version of the adversary method which goes beyond this principle to make explicit use of the stronger condition that the algorithm actually computes the function. This new method, which we call ADV±, has all the advantages of the old: it is a lower bound on bounded-error quantum query complexity, its square is a lower bound on formula size, and it behaves well with respect to function composition. Moreover, ADV± is always at least as large as the adversary method ADV, and we show an example of a monotone function for which ADV±(f) = Ω(ADV(f)^{1.098}). We also give examples showing that ADV± does not face limitations of ADV like the certificate complexity barrier and the property testing barrier.

Journal ArticleDOI
TL;DR: In this paper, the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma was determined.
Abstract: We determine the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma. Our starting point is the field-theoretic formula for the sterile neutrino production rate, derived in our previous work [JHEP 06(2006)053], which allows us to systematically incorporate all relevant effects, and also to analyse various hadronic uncertainties. Our numerical results differ moderately from previous computations in the literature, and lead to an absolute upper bound on the mixing angles of the dark matter sterile neutrino. Comparing this bound with existing astrophysical X-ray constraints, we find that the Dodelson-Widrow scenario, which proposes sterile neutrinos generated by active-sterile neutrino transitions to be the sole source of dark matter, is only possible for sterile neutrino masses lighter than 3.5 keV (6 keV if all hadronic uncertainties are pushed in one direction and the most stringent X-ray bounds are relaxed by a factor of two). This upper bound may conflict with a lower bound from structure formation, but a definitive conclusion necessitates numerical simulations with the non-equilibrium momentum distribution function that we derive. If other production mechanisms are also operative, no upper bound on the sterile neutrino mass can be established.

Proceedings Article
03 Dec 2007
TL;DR: This paper presents an algorithm which achieves O*(n^{3/2} √T) regret in the bandit setting and presents lower bounds showing that the gap between the bandit and full-information settings is at least √n, which is conjectured to be the correct order.
Abstract: In the online linear optimization problem, a learner must choose, in each round, a decision from a set D ⊂ ℝ^n in order to minimize an (unknown and changing) linear cost function. We present sharp rates of convergence (with respect to additive regret) for both the full information setting (where the cost function is revealed at the end of each round) and the bandit setting (where only the scalar cost incurred is revealed). In particular, this paper is concerned with the price of bandit information, by which we mean the ratio of the best achievable regret in the bandit setting to that in the full-information setting. For the full information case, the upper bound on the regret is O*(√(nT)), where n is the ambient dimension and T is the time horizon. For the bandit case, we present an algorithm which achieves O*(n^{3/2} √T) regret — all previous (nontrivial) bounds here were O(poly(n) T^{2/3}) or worse. It is striking that the convergence rate for the bandit setting is only a factor of n worse than in the full information case — in stark contrast to the K-arm bandit setting, where the gap in the dependence on K is exponential (√(TK) vs. √(T log K)). We also present lower bounds showing that this gap is at least √n, which we conjecture to be the correct order. The bandit algorithm we present can be implemented efficiently in special cases of particular interest, such as path planning and Markov Decision Problems.
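For orientation, a hedged sketch of a standard full-information baseline on the unit Euclidean ball: projected online gradient descent with step size proportional to 1/sqrt(T) attains O(sqrt(T)) regret against linear losses. The paper's bandit algorithm, which must estimate cost vectors from scalar feedback, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 2000
costs = rng.uniform(-1, 1, size=(T, n))           # invented sequence of linear cost vectors

def project_to_ball(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

eta = 1.0 / np.sqrt(T)                            # step size giving O(sqrt(T)) regret
x = np.zeros(n)
learner_cost = 0.0
for c in costs:
    learner_cost += c @ x                         # play x, then observe the cost vector
    x = project_to_ball(x - eta * c)              # full-information gradient step

best_fixed = costs.sum(axis=0)
best_cost = -np.linalg.norm(best_fixed)           # best fixed point in the unit ball in hindsight
print("regret:", learner_cost - best_cost)        # grows like O(sqrt(T)), not linearly in T
```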

Journal ArticleDOI
TL;DR: A lower bound on the largest eigenvalue of the Karhunen-Loève (KL) expansion of random multipath fields is established, which quantifies, to some extent, the well-known reduction of multipath richness as the angular power spread of the multipath angular power spectrum is reduced.
Abstract: We study the dimensions or degrees of freedom of farfield multipath that is observed in a limited, source-free region of space. The multipath fields are studied as solutions to the wave equation in an infinite-dimensional vector space. We prove two universal upper bounds on the truncation error of fixed and random multipath fields. A direct consequence of the derived bounds is that both fixed and random multipath fields have an effective finite dimension. For circular and spherical spatial regions, we show that this finite dimension is proportional to the radius and area of the region, respectively. We use the Karhunen-Loève (KL) expansion of random multipath fields to quantify the notion of multipath richness. The multipath richness is defined as the number of significant eigenvalues in the KL expansion that achieve 99% of the total multipath energy. We establish a lower bound on the largest eigenvalue. This lower bound quantifies, to some extent, the well-known reduction of multipath richness as the angular power spread of the multipath angular power spectrum is reduced.
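The richness measure defined above is easy to state operationally; the sketch below computes, from an invented spatial covariance matrix, the number of KL eigenvalues needed to capture 99% of the total energy.

```python
import numpy as np

def multipath_richness(R, energy_fraction=0.99):
    """Number of eigenvalues of the covariance R that capture the given fraction
    of the total energy (trace), i.e. the KL-based richness measure above."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]        # real eigenvalues, descending
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, energy_fraction) + 1)

# Invented example: a field sampled on a line with a sinc-like spatial correlation
# (positions in wavelengths); the kernel choice is an assumption of this sketch.
positions = np.linspace(0.0, 2.0, 64)
R = np.sinc(2.0 * np.subtract.outer(positions, positions))
print(multipath_richness(R))                              # effective dimension of the field
```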

Journal ArticleDOI
TL;DR: A survey of upper bounds presented in the literature is given, and several of the bounds are compared with respect to their relative tightness and computational effort.

Journal ArticleDOI
TL;DR: In this paper, an upper bound on simple quantum hypothesis testing in the asymmetric setting is shown using a useful inequality by Audenaert et al. Using this upper bound, the authors obtain the Hoeffding bound, which is identical with the classical counterpart if the hypotheses, composed of two density operators, are mutually commutative.
Abstract: An upper bound on simple quantum hypothesis testing in the asymmetric setting is shown using a useful inequality by Audenaert et al. [Phys. Rev. Lett. 98, 160501 (2007)] which was originally invented for symmetric setting. Using this upper bound, we obtain the Hoeffding bound, which is identical with the classical counterpart if the hypotheses, composed of two density operators, are mutually commutative. Its attainability has been a long-standing open problem. Further, using this bound, we obtain a better exponential upper bound of the average error probability of classical-quantum channel coding.

Proceedings ArticleDOI
01 May 2007
TL;DR: The throughput benefit ratio under the physical model is also bounded by a constant; thus, it is shown for both the protocol and physical model that the coding benefit in terms of throughput is a constant factor.
Abstract: Gupta and Kumar established that the per node throughput of ad hoc networks with multi-pair unicast traffic scales (poorly) as λ(n) = Θ(1/√(n log n)) with an increasing number of nodes n. However, Gupta and Kumar did not consider the possibility of network coding and broadcasting in their model, and recent work has suggested that such techniques have the potential to greatly improve network throughput. In [1], we have shown that for the protocol communication model of Gupta and Kumar [2], the multi-unicast throughput of schemes using arbitrary network coding and broadcasting in a two-dimensional random topology also scales as λ(n) = Θ(1/√(n log n)), thus showing that network coding provides no order difference improvement on throughput. Of course, in practice the constant factor of improvement is important; thus, here we derive bounds for the throughput benefit ratio - the ratio of the throughput of the optimal network coding scheme to the throughput of the optimal non-coding flow scheme. We show that the improvement factor is (1+Δ)/(1+Δ/2) for 1D random networks, where Δ > 0 is a parameter of the wireless medium that characterizes the intensity of the interference. We obtain this by giving tight bounds (both upper and lower) on the throughput of the coding and flow schemes. For 2D networks, we obtain an upper bound for the throughput benefit ratio as α(n) ≤ 2c_Δ √(π(1+Δ)/Δ) for large n, where c_Δ = max{2, √(Δ² + 2Δ)}. This is obtained by finding an upper bound for the coding throughput and a lower bound for the flow throughput. We then consider the more general physical communication model as in Gupta and Kumar. We show that the coding scheme throughput in this case is upper bounded by Θ(1/n) for the 1D random network and by Θ(1/√n) for the 2D case. We also show the flow scheme throughput for the 1D case can achieve the same order throughput as the coding scheme. Combined with previous work on a 2D lower bound [3], we conclude that the throughput benefit ratio under the physical model is also bounded by a constant; thus, we have shown for both the protocol and physical model that the coding benefit in terms of throughput is a constant factor. Finally, we evaluate the potential coding gain from another important perspective - total energy efficiency - and show that the factor by which the total energy is decreased is upper bounded by 3.

Journal ArticleDOI
TL;DR: A lower bound on the quantum accuracy threshold, 1.94 × 10^(-4) for adversarial stochastic noise, is proved that improves previous lower bounds by nearly an order of magnitude.
Abstract: We discuss how the presence of gauge subsystems in the Bacon-Shor code [D. Bacon, Phys. Rev. A 73, 012340 (2006)] leads to remarkably simple and efficient methods for fault-tolerant error correction (FTEC). Most notably, FTEC does not require entangled ancillary states, and it can be implemented with nearest-neighbor two-qubit measurements. By using these methods, we prove a lower bound on the quantum accuracy threshold, 1.94 × 10^(-4) for adversarial stochastic noise, that improves previous lower bounds by nearly an order of magnitude.

Journal ArticleDOI
TL;DR: This work revisits a classical load balancing problem in the modern context of decentralized systems and self-interested clients and proves nearly tight bounds on the price of anarchy (worst-case ratio between a Nash solution and the social optimum) for linear latency functions.
Abstract: We revisit a classical load balancing problem in the modern context of decentralized systems and self-interested clients. In particular, there is a set of clients, each of whom must choose a server from a permissible set. Each client has a unit-length job and selfishly wants to minimize its own latency (job completion time). A server's latency is inversely proportional to its speed, but it grows linearly with (or, more generally, as the pth power of) the number of clients matched to it. This interaction is naturally modeled as an atomic congestion game, which we call selfish load balancing. We analyze the Nash equilibria of this game and prove nearly tight bounds on the price of anarchy (worst-case ratio between a Nash solution and the social optimum). In particular, for linear latency functions, we show that if the server speeds are relatively bounded and the number of clients is large compared with the number of servers, then every Nash assignment approaches social optimum. Without any assumptions on the number of clients, servers, and server speeds, the price of anarchy is at most 2.5. If all servers have the same speed, then the price of anarchy further improves to $1 + 2/\sqrt{3} \approx 2.15.$ We also exhibit a lower bound of 2.01. Our proof techniques can also be adapted for the coordinated load balancing problem under L2 norm, where it slightly improves the best previously known upper bound on the competitive ratio of a simple greedy scheme.
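A toy sketch of the game: clients with unit jobs best-respond until no one can reduce its latency, and the resulting Nash cost is compared against a brute-force social optimum. The server speeds, the linear latency model, and the use of total latency as the social cost are assumptions made for this illustration.

```python
import itertools
import numpy as np

speeds = np.array([1.0, 1.0, 2.0])              # invented server speeds
num_clients, num_servers = 6, len(speeds)

def social_cost(assignment):
    """Total latency when each server's latency is (its load) / (its speed)."""
    loads = np.bincount(assignment, minlength=num_servers)
    return float(np.sum(loads**2 / speeds))     # each of `load` clients pays load/speed

# Best-response dynamics from an arbitrary assignment until a Nash equilibrium is reached.
assignment = np.zeros(num_clients, dtype=int)   # everyone starts on server 0
improved = True
while improved:
    improved = False
    for client in range(num_clients):
        loads = np.bincount(assignment, minlength=num_servers)
        current = loads[assignment[client]] / speeds[assignment[client]]
        # latency the client would see on each server after (possibly) moving there
        candidate = (loads + (np.arange(num_servers) != assignment[client])) / speeds
        best = int(np.argmin(candidate))
        if candidate[best] < current - 1e-12:
            assignment[client] = best
            improved = True

optimum = min(social_cost(np.array(a))
              for a in itertools.product(range(num_servers), repeat=num_clients))
print("Nash cost / optimum on this instance:", social_cost(assignment) / optimum)
```

On this small instance the ratio may well equal 1; the paper's bounds concern the worst case over all instances.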

Proceedings ArticleDOI
01 May 2007
TL;DR: Simulation results show that solutions obtained by this algorithm are very close to lower bounds obtained via relaxation, thus suggesting that the solution produced by the algorithm is near-optimal.
Abstract: Software defined radio (SDR) capitalizes on advances in signal processing and radio technology and is capable of reconfiguring RF and switching to desired frequency bands. It is a frequency-agile data communication device that is vastly more powerful than recently proposed multi-channel multi-radio (MC-MR) technology. In this paper, we investigate the important problem of multi-hop networking with SDR nodes. For such a network, each node has a pool of frequency bands (not necessarily of equal size) that can be used for communication. The uneven size of bands in the radio spectrum prompts the need for further division into sub-bands for optimal spectrum sharing. We characterize behaviors and constraints for such a multi-hop SDR network from multiple layers, including modeling of spectrum sharing and sub-band division, scheduling and interference constraints, and flow routing. We give a formal mathematical formulation with the objective of minimizing the required network-wide radio spectrum resource for a set of user sessions. Since such a problem formulation falls into mixed integer non-linear programming (MINLP), which is NP-hard in general, we develop a lower bound for the objective by relaxing the integer variables and linearization. Subsequently, we develop a near-optimal algorithm for this MINLP problem. This algorithm is based on a novel sequential fixing procedure, where the integer variables are determined iteratively via a sequence of linear programs. Simulation results show that solutions obtained by this algorithm are very close to the lower bounds obtained via relaxation, thus suggesting that the solution produced by the algorithm is near-optimal.
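The sequential fixing idea is generic: solve the linear relaxation, fix the integer variable whose relaxed value is most decided, and re-solve until all are fixed. The sketch below applies it to a small invented 0-1 knapsack-style problem with scipy's linprog; it only illustrates the procedure and is unrelated to the paper's actual spectrum-sharing formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Invented binary problem: maximize values.x  s.t.  weights.x <= capacity,  x in {0,1}^n.
values = np.array([8.0, 11.0, 6.0, 4.0, 9.0])
weights = np.array([5.0, 7.0, 4.0, 3.0, 6.0])
capacity = 13.0

n = len(values)
fixed = {}                                            # index -> fixed value (0.0 or 1.0)
while len(fixed) < n:
    bounds = [(fixed.get(i, 0.0), fixed.get(i, 1.0)) for i in range(n)]
    res = linprog(-values, A_ub=weights[None, :], b_ub=[capacity], bounds=bounds)
    x = res.x                                         # assumes each restricted LP stays feasible
    free = [i for i in range(n) if i not in fixed]
    # fix the free variable whose relaxed value is closest to 0 or 1 (most "decided")
    i_star = min(free, key=lambda i: min(x[i], 1.0 - x[i]))
    fixed[i_star] = float(round(x[i_star]))

x_int = np.array([fixed[i] for i in range(n)])
print(x_int, values @ x_int, weights @ x_int)         # heuristic solution; may be suboptimal
```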

Journal ArticleDOI
TL;DR: This paper develops the a posteriori error estimation of hp-version interior penalty discontinuous Galerkin discretizations of elliptic boundary-value problems and derives computable upper and lower bounds on the error measured in terms of a natural energy norm.
Abstract: In this paper, we develop the a posteriori error estimation of hp-version interior penalty discontinuous Galerkin discretizations of elliptic boundary-value problems. Computable upper and lower bounds on the error measured in terms of a natural (mesh-dependent) energy norm are derived. The bounds are explicit in the local mesh sizes and approximation orders. A series of numerical experiments illustrate the performance of the proposed estimators within an automatic hp-adaptive refinement procedure.

Journal ArticleDOI
TL;DR: With a substantially enlarged rate parameter, CSL effects may be within range of experimental detection (or refutation) with current technologies, and modifications in the analysis corresponding to a larger value of rC are discussed.
Abstract: We study lower and upper bounds on the parameters for stochastic state vector reduction, focusing on the mass-proportional continuous spontaneous localization (CSL) model. We show that the assumption that the state vector is reduced when a latent image is formed, in photography or etched track detection, requires a CSL reduction rate parameter λ that is larger than conventionally assumed by a factor of roughly 2 × 10^(9±2), for a correlation length r_C of 10^(-5) cm. We reanalyse existing upper bounds on the reduction rate and conclude that all are compatible with such an increase in λ. The best bounds that we have obtained come from a consideration of heating of the intergalactic medium (IGM), which shows that λ can be at most ~10^(8±1) times as large as the standard CSL value, again for r_C = 10^(-5) cm. (For both the lower and upper bounds, quoted errors are not purely statistical errors, but rather are estimates reflecting modelling uncertainties.) We discuss modifications in our analysis corresponding to a larger value of r_C. With a substantially enlarged rate parameter, CSL effects may be within range of experimental detection (or refutation) with current technologies.