
Showing papers on "Upper and lower bounds" published in 2006


Journal ArticleDOI
TL;DR: The sequential tree-reweighted message passing (TRW-S) algorithm, as discussed by the authors, is a modification of tree-reweighted max-product message passing (TRW), which was proposed by Wainwright et al.
Abstract: Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper, we focus on the recent technique proposed by Wainwright et al. (Nov. 2005): tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound - it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both ordinary belief propagation and the tree-reweighted algorithm of (M. J. Wainwright et al., Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts.

1,116 citations
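
The lower bound that TRW-style algorithms maximize can be illustrated with a bare-bones dual-decomposition sketch in Python (this is not the TRW-S algorithm itself, and the grid size, labels and potentials below are made up): the energy of a tiny Potts grid is split into row and column chains, each chain is minimized exactly by min-sum dynamic programming, and the sum of the chain minima is a lower bound on the true minimum energy, verified here by brute force.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
H, W, L = 3, 3, 3                     # 3x3 grid, 3 labels (illustrative sizes)
unary = rng.uniform(0, 1, (H, W, L))  # made-up unary costs
potts = 0.5                           # Potts weight for every edge

def chain_min(costs, w):
    """Exact minimum of a chain MRF with Potts interactions (min-sum DP)."""
    m = costs[0].copy()
    for c in costs[1:]:
        # incoming message: keep the previous label (cost 0) or switch (cost w)
        m = c + np.minimum(m, m.min() + w)
    return m.min()

# Dual decomposition: each unary is split half into its row chain, half into its
# column chain; horizontal edges go to row chains, vertical edges to column chains.
rows_bound = sum(chain_min([unary[i, j] / 2 for j in range(W)], potts) for i in range(H))
cols_bound = sum(chain_min([unary[i, j] / 2 for i in range(H)], potts) for j in range(W))
lower_bound = rows_bound + cols_bound

# Brute-force global minimum for comparison (feasible only for tiny grids).
best = min(
    sum(unary[i, j, lab[i * W + j]] for i in range(H) for j in range(W))
    + sum(potts * (lab[i * W + j] != lab[i * W + j + 1]) for i in range(H) for j in range(W - 1))
    + sum(potts * (lab[i * W + j] != lab[(i + 1) * W + j]) for i in range(H - 1) for j in range(W))
    for lab in itertools.product(range(L), repeat=H * W)
)
print(f"tree lower bound = {lower_bound:.3f} <= true minimum = {best:.3f}")
```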


Posted Content
TL;DR: In this article, the authors considered the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper and characterized the secrecy capacity of such a system under the assumption of asymptotically long coherence intervals.
Abstract: We consider the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper. Our eavesdropper can be viewed as the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a system is characterized under the assumption of asymptotically long coherence intervals. We first consider the full Channel State Information (CSI) case, where the transmitter has access to the channel gains of the legitimate receiver and the eavesdropper. The secrecy capacity under this full CSI assumption serves as an upper bound for the secrecy capacity when only the CSI of the legitimate receiver is known at the transmitter, which is characterized next. In each scenario, the perfect secrecy capacity is obtained along with the optimal power and rate allocation strategies. We then propose a low-complexity on/off power allocation strategy that achieves near-optimal performance with only the main channel CSI. More specifically, this scheme is shown to be asymptotically optimal as the average SNR goes to infinity, and interestingly, is shown to attain the secrecy capacity under the full CSI assumption. Remarkably, our results reveal the positive impact of fading on the secrecy capacity and establish the critical role of rate adaptation, based on the main channel CSI, in facilitating secure communications over slow fading channels.

732 citations


Book ChapterDOI
15 Aug 2006
TL;DR: An online deterministic algorithm named BAR is presented and it is proved that it is 4.56-competitive, which improves the previous algorithm of Kim and Chwa which was shown to be 5-competitive by Chan et al.
Abstract: We study an on-line broadcast scheduling problem in which requests have deadlines, and the objective is to maximize the weighted throughput, i.e., the weighted total length of the satisfied requests. For the case where all requested pages have the same length, we present an online deterministic algorithm named BAR and prove that it is 4.56-competitive. This improves the previous algorithm of Kim and Chwa [11] which is shown to be 5-competitive by Chan et al. [4]. In the case that pages may have different lengths, we prove a lower bound of Ω(Δ/logΔ) on the competitive ratio where Δ is the ratio of maximum to minimum page lengths. This improves upon the previous $\sqrt{\Delta}$ lower bound in [11,4] and is much closer to the current upper bound of ($\Delta+2\sqrt{\Delta}+2$) in [7]. Furthermore, for small values of Δ we give better lower bounds.

669 citations


Journal ArticleDOI
TL;DR: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/ channel trellis.
Abstract: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.

598 citations
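
A minimal simulation-based sketch of the forward sum-product idea (illustrative only; the two-tap ISI channel, its coefficient and the noise level are assumptions, not the paper's setup): sample a long output sequence, run a scaled forward recursion on the channel trellis to estimate -1/n·log2 p(y), and subtract the conditional entropy of the noise to obtain an information-rate estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, sigma = 100_000, 0.8, 0.8          # illustrative tap and noise values
x = rng.choice([-1.0, 1.0], size=n + 1)  # i.i.d. +/-1 inputs; x[0] initializes the state
y = x[1:] + a * x[:-1] + sigma * rng.normal(size=n)

def gauss(y_k, mean):
    return np.exp(-(y_k - mean) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

states = np.array([-1.0, 1.0])           # trellis state = previous input symbol
alpha = np.array([0.5, 0.5])             # initial state distribution
log_py = 0.0
for y_k in y:
    # alpha_new[j] = sum_i alpha[i] * P(x_k = states[j]) * p(y_k | state i, input j)
    branch = gauss(y_k, states[None, :] + a * states[:, None])
    alpha = (alpha[:, None] * 0.5 * branch).sum(axis=0)
    c = alpha.sum()                      # scale factor = p(y_k | y_1..k-1)
    log_py += np.log2(c)
    alpha /= c

h_y = -log_py / n                                       # estimate of h(Y) per symbol (bits)
h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)
print(f"estimated information rate ~ {h_y - h_y_given_x:.3f} bits/channel use")
```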


Journal ArticleDOI
TL;DR: A residual test (RT) is proposed that can simultaneously determine the number of line-of-sight (LOS) BS and identify them; localization can then proceed with only those LOS BS.
Abstract: Three or more base stations (BS) making time-of-arrival measurements of a signal from a mobile station (MS) can locate the MS. However, when some of the measurements are from non-line-of-sight (NLOS) paths, the location errors can be very large. This paper proposes a residual test (RT) that can simultaneously determine the number of line-of-sight (LOS) BS and identify them. Then, localization can proceed with only those LOS BS. The RT works on the principle that when all measurements are LOS, the normalized residuals have a central chi-square distribution, versus a noncentral distribution when there is NLOS. The residuals are the squared differences between the estimates and the true position. Normalization by their variances gives a unity variance to the resultant random variables. In simulation studies, for the chosen geometry and NLOS and measurement noise errors, the RT can determine the correct number of LOS-BS over 90% of the time. For four or more BS, where there are at least three LOS-BS, the estimator has variances that are near the Cramér-Rao lower bound.

485 citations
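
A simplified numeric sketch of the chi-square residual idea (hypothetical base-station geometry, noise level and NLOS bias, and a plain nonlinear least-squares fit rather than the paper's full RT procedure): when all ranges are LOS, the normalized residual sum stays near its central chi-square range, while an NLOS path inflates it well past the threshold.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import chi2

rng = np.random.default_rng(2)
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])  # hypothetical BS layout (m)
ms = np.array([420.0, 310.0])                                                # true mobile position
sigma = 10.0                                                                 # range-noise std (m)

def fit_and_test(nlos_bias):
    d = np.linalg.norm(bs - ms, axis=1)
    r = d + sigma * rng.normal(size=len(bs))
    r[0] += nlos_bias                          # extra positive delay on BS 0 if NLOS
    res_fun = lambda p: (r - np.linalg.norm(bs - p, axis=1)) / sigma
    sol = least_squares(res_fun, x0=np.array([500.0, 500.0]))
    stat = np.sum(res_fun(sol.x) ** 2)         # normalized residual sum
    thresh = chi2.ppf(0.95, df=len(bs) - 2)    # 2 position unknowns
    return stat, thresh

for bias, label in [(0.0, "all LOS"), (200.0, "one NLOS path")]:
    stat, thresh = fit_and_test(bias)
    print(f"{label}: residual statistic = {stat:.1f}, chi-square 95% threshold = {thresh:.1f}")
```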


Journal ArticleDOI
TL;DR: In this article, the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space, has been studied, and it has been shown that when flavour dynamics are included, there is no model-independent limit on the light neutrino mass scale, and that the lower bound on the reheat temperature is relaxed by a factor of about 3-10.
Abstract: We study the impact of flavour in thermal leptogenesis, including the quantum oscillations of the asymmetries in lepton flavour space. In the Boltzmann equations we find different numerical factors and additional terms which can affect the results significantly. The upper bound on the CP asymmetry in a specific flavour is weaker than the bound on the sum. This suggests that, when flavour dynamics is included, there is no model-independent limit on the light neutrino mass scale, and that the lower bound on the reheat temperature is relaxed by a factor of ~3-10.

449 citations


Journal ArticleDOI
20 Jan 2006 - Sensors
TL;DR: It is shown that the lower bound of the send-on-delta effectiveness is independent of the sampling resolution and constitutes a built-in feature of the input signal.
Abstract: The paper addresses the issue of the send-on-delta data collecting strategy to capture information from the environment. Send-on-delta is a signal-dependent temporal sampling scheme in which a sample is triggered when the signal deviates by delta, defined as a significant change of its value. It is an attractive scheme for wireless sensor networking due to its efficient energy consumption. Quantitative evaluations of the send-on-delta scheme for a general continuous-time bandlimited signal are presented in the paper. The bounds on the mean traffic of reports for a given signal and assumed sampling resolution are evaluated. Furthermore, the send-on-delta effectiveness, defined as the reduction of the mean rate of reports in comparison to periodic sampling for a given resolution, is derived. It is shown that the lower bound of the send-on-delta effectiveness (i.e. the guaranteed reduction) is independent of the sampling resolution and constitutes a built-in feature of the input signal. The calculation of the effectiveness for standard signals that model the state evolution of a dynamic environment in time is exemplified. Finally, an example of send-on-delta programming is shown.

446 citations
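
A minimal sketch of the send-on-delta rule and of the report-rate reduction relative to periodic sampling; the test signal, sampling grid and delta are arbitrary illustrative choices.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10_001)      # periodic sampling grid (dt = 1 ms)
x = np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.sin(2 * np.pi * 1.3 * t)  # illustrative smooth signal
delta = 0.05                            # significant-change threshold

reports = [0]                           # send-on-delta: report only when |x - last reported| >= delta
for k in range(1, len(t)):
    if abs(x[k] - x[reports[-1]]) >= delta:
        reports.append(k)

periodic_rate = len(t) / t[-1]
sod_rate = len(reports) / t[-1]
print(f"periodic: {periodic_rate:.0f} reports/s, send-on-delta: {sod_rate:.1f} reports/s, "
      f"reduction ~ {periodic_rate / sod_rate:.0f}x")
```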


Journal ArticleDOI
TL;DR: A new version of the quantum threshold theorem is proved that applies to concatenation of a quantum code that corrects only one error, and this theorem is used to derive a rigorous lower bound on the quantum accuracy threshold ε0, the best lower bound that has been rigorously proven so far.
Abstract: We prove a new version of the quantum threshold theorem that applies to concatenation of a quantum code that corrects only one error, and we use this theorem to derive a rigorous lower bound on the quantum accuracy threshold ε0. Our proof also applies to concatenation of higher-distance codes, and to noise models that allow faults to be correlated in space and in time. The proof uses new criteria for assessing the accuracy of fault-tolerant circuits, which are particularly conducive to the inductive analysis of recursive simulations. Our lower bound on the threshold, ε0 ≥ 2.73 × 10^{-5} for an adversarial independent stochastic noise model, is derived from a computer-assisted combinatorial analysis; it is the best lower bound that has been rigorously proven so far.

440 citations


Journal ArticleDOI
Lorenzo Amati
TL;DR: The E_{p,i}-E_{iso} correlation between the cosmological rest-frame νFν spectrum peak energy, E_{p,i}, and the isotropic-equivalent radiated energy, E_{iso}, discovered by Amati et al., as discussed by the authors, is one of the most intriguing and debated observational evidences in gamma-ray burst (GRB) astrophysics.
Abstract: The correlation between the cosmological rest-frame νFν spectrum peak energy, E_{p,i}, and the isotropic-equivalent radiated energy, E_{iso}, discovered by Amati et al. in 2002 and confirmed/extended by subsequent observations, is one of the most intriguing and debated observational evidences in gamma-ray burst (GRB) astrophysics. In this paper, I provide an update and a re-analysis of the E_{p,i}-E_{iso} correlation based on an updated sample consisting of 41 long GRBs/X-ray flashes (XRFs) with firm estimates of z and observed peak energy, E_{p,obs}, 12 GRBs with uncertain values of z and/or E_{p,obs}, two short GRBs with firm estimates of z and E_{p,obs}, and the peculiar subenergetic events GRB 980425/SN1998bw and GRB 031203/SN2003lw. In addition to standard correlation analysis and power-law fitting, the data analysis reported here includes modelling that accounts for sample variance. All 53 classical long GRBs and XRFs, including 11 Swift events with published spectral parameters and fluences, have E_{p,i} and E_{iso} values, or upper/lower limits, consistent with the correlation, which shows a chance probability as low as ~7 × 10^{-15}, a slope of ~0.57 (~0.5 when fitting by accounting for sample variance) and an extra-Poissonian logarithmic dispersion of ~0.15; it extends over ~5 orders of magnitude in E_{iso} and ~3 orders of magnitude in E_{p,i} and holds from the closest to the highest-z GRBs. Subenergetic GRBs (980425 and possibly 031203) and short GRBs are found to be inconsistent with the E_{p,i}-E_{iso} correlation, showing that it can be a powerful tool for discriminating different classes of GRBs and understanding their nature and differences. I also discuss the main implications of the updated E_{p,i}-E_{iso} correlation for models of the physics and geometry of GRB emission, its use as a pseudo-redshift estimator and tests of possible selection effects with GRBs of unknown redshift.

404 citations
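
The correlation analysis essentially amounts to a power-law fit in log-log space, E_{p,i} ∝ E_{iso}^m with m ≈ 0.5; a minimal sketch on synthetic data (all numbers below are made up and have nothing to do with the paper's GRB sample):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic sample: log10 E_iso spread over ~5 decades, true slope 0.5, 0.15 dex extra scatter
log_eiso = rng.uniform(50.0, 55.0, size=50)
log_ep = 2.0 + 0.5 * (log_eiso - 52.0) + rng.normal(0.0, 0.15, size=50)

slope, intercept = np.polyfit(log_eiso, log_ep, 1)   # power-law fit in log-log space
r = np.corrcoef(log_eiso, log_ep)[0, 1]
scatter = np.std(log_ep - (slope * log_eiso + intercept))
print(f"fitted slope = {slope:.2f}, correlation coefficient = {r:.3f}, dispersion = {scatter:.2f} dex")
```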


Journal ArticleDOI
TL;DR: In this article, the authors studied an empirical risk minimization problem, where the goal is to obtain very general upper bounds on the excess risk of the empirical risk minimizer over a class of measurable functions, expressed in terms of relevant geometric parameters of the class.
Abstract: Let ℱ be a class of measurable functions f: S ↦ [0, 1] defined on a probability space (S, $\mathcal{A}$, P). Given a sample (X_1, …, X_n) of i.i.d. random variables taking values in S with common distribution P, let P_n denote the empirical measure based on (X_1, …, X_n). We study an empirical risk minimization problem P_n f → min, f ∈ ℱ. Given a solution $\hat{f}_n$ of this problem, the goal is to obtain very general upper bounds on its excess risk $$\mathcal{E}_{P}(\hat{f}_{n}) := P\hat{f}_{n} - \inf_{f\in \mathcal{F}} Pf,$$ expressed in terms of relevant geometric parameters of the class ℱ. Using concentration inequalities and other empirical process tools, we obtain both distribution-dependent and data-dependent upper bounds on the excess risk that are of asymptotically correct order in many examples. The bounds involve localized sup-norms of empirical and Rademacher processes indexed by functions from the class. We use these bounds to develop model selection techniques in abstract risk minimization problems that can be applied to more specialized frameworks of regression and classification.

381 citations
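
One ingredient of such data-dependent bounds, the empirical Rademacher complexity of the class, can be estimated by Monte Carlo; a toy sketch for a small finite class of threshold classifiers (not the localized quantities actually used in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = rng.uniform(0.0, 1.0, size=n)                       # sample from S = [0, 1]
thresholds = np.linspace(0.0, 1.0, 21)                  # finite class f_t(x) = 1{x <= t}
F = (X[None, :] <= thresholds[:, None]).astype(float)   # |class| x n matrix of function values

def empirical_rademacher(F, n_mc=2000):
    """Monte Carlo estimate of E_sigma sup_f (1/n) sum_i sigma_i f(X_i)."""
    total = 0.0
    for _ in range(n_mc):
        sigma = rng.choice([-1.0, 1.0], size=F.shape[1])
        total += np.max(F @ sigma) / F.shape[1]
    return total / n_mc

rad = empirical_rademacher(F)
# A typical non-localized bound controls excess risk by roughly 2*rad plus a sqrt(log(1/delta)/n) term.
print(f"empirical Rademacher complexity ~ {rad:.3f}")
```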


Proceedings ArticleDOI
05 Jun 2006
TL;DR: It is demonstrated that the worst-case running time of the k-means method is superpolynomial by improving the best known lower bound from Ω(n) iterations to 2^{Ω(√n)}.
Abstract: The k-means method is an old but popular clustering algorithm known for its observed speed and its simplicity. Until recently, however, no meaningful theoretical bounds were known on its running time. In this paper, we demonstrate that the worst-case running time of k-means is superpolynomial by improving the best known lower bound from Ω(n) iterations to 2^{Ω(√n)}.
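For context, the object of study is plain Lloyd-style k-means iteration; a minimal sketch that simply counts iterations until the assignment stops changing (random Gaussian data and a small k, unrelated to the worst-case construction in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 2))
k = 5
centers = X[rng.choice(len(X), size=k, replace=False)]

iterations = 0
assign = None
while True:
    iterations += 1
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    new_assign = d.argmin(axis=1)            # assign each point to its nearest center
    if assign is not None and np.array_equal(new_assign, assign):
        break                                # assignments unchanged: converged
    assign = new_assign
    centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
                        for j in range(k)])  # recompute centers (keep old one if a cluster empties)

print(f"k-means converged after {iterations} iterations")
```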

Journal ArticleDOI
TL;DR: In this article, the authors present a method for obtaining strict lower bound solutions using second-order cone programming (SOCP), for which efficient primal-dual interior-point algorithms have been developed.
Abstract: The formulation of limit analysis by means of the finite element method leads to an optimization problem with a large number of variables and constraints. Here we present a method for obtaining strict lower bound solutions using second-order cone programming (SOCP), for which efficient primal-dual interior-point algorithms have recently been developed. Following a review of previous work, we provide a brief introduction to SOCP and describe how lower bound limit analysis can be formulated in this way. Some methods for exploiting the data structure of the problem are also described, including an efficient strategy for detecting and removing linearly dependent constraints at the assembly stage. The benefits of employing SOCP are then illustrated with numerical examples. Through the use of an effective algorithm/software, very large optimization problems with up to 700 000 variables are solved in minutes on a desktop machine. The numerical examples concern plane strain conditions and the Mohr–Coulomb criterion; however, we show that SOCP can also be applied to any other problem of lower bound limit analysis involving a yield function with a conic quadratic form (notable examples being the Drucker–Prager criterion in 2D or 3D, and Nielsen's criterion for plates). Copyright © 2005 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, a complete, semi-empirical donor sequence for CVs with orbital periods less than 6 hours is presented, and the predicted spectral types of the donor sequence are compared with the observed SpTs as a function of orbital period.
Abstract: We construct a complete, semi-empirical donor sequence for CVs with orbital periods less than 6 hrs. All key physical and photometric parameters of CV secondaries (along with their spectral types) are given as a function of P_orb along this sequence. The main observational basis for our donor sequence is an empirical mass-radius relation for CV secondaries. We present an optimal estimate for this relation that ensures consistency with the observed locations of the period gap and the period minimum. We also present new determinations of these periods, finding P_{gap, upper edge} = 3.18 +/- 0.04 hr, P_{gap, lower edge} = 2.15 +/- 0.03 hr and P_{min} = 76.2 +/- 1.0 min. We then test the donor sequence by comparing observed and predicted spectral types (SpTs) as a function of orbital period. To this end, we update the SpT compilation of Beuermann et al. and show explicitly that CV donors have later SpTs than main sequence (MS) stars at all orbital periods. The semi-empirical donor sequence matches the observed SpTs very well, implying that the empirical M2-R2 relation predicts just the right amount of radius expansion. We finally apply the donor sequence to the problem of distance estimation. Based on a sample of 22 CVs with trigonometric parallaxes, we show that the donor sequence correctly traces the envelope of the observed M_{JHK}-P_{orb} distribution. Thus robust lower limits on distances can be obtained from single-epoch infrared observations.

Journal ArticleDOI
TL;DR: In this article, two strategies for stabilization of discrete-time linear switched systems were proposed, one of open-loop nature (trajectory independent) and the other of closed-loop nature, based on the solution of so-called Lyapunov-Metzler inequalities.
Abstract: This paper addresses two strategies for stabilization of discrete time linear switched systems. The first one is of open loop nature (trajectory independent) and is based on the determination of an upper bound of the minimum dwell time by means of a family of quadratic Lyapunov functions. The relevant point on dwell time calculation is that the proposed stability condition does not require the Lyapunov function be uniformly decreasing at every switching time. The second one is of closed loop nature (trajectory dependent) and is designed from the solution of what we call Lyapunov–Metzler inequalities from which the stability condition is expressed. Being non-convex, a more conservative but simpler to solve version of the Lyapunov–Metzler inequalities is provided. The theoretical results are illustrated by means of examples.

Journal ArticleDOI
TL;DR: This paper derives a closed-form approximate solution to the ML equations that is near-optimal, attains the theoretical lower bound for different geometries, and is superior to two other closed-form linear estimators.
Abstract: Sensors at separate locations measuring either the time difference of arrival (TDOA) or time of arrival (TOA) of the signal from an emitter can determine its position as the intersection of hyperbolae for TDOA and of circles for TOA. Because of measurement noise, the nonlinear localization equations become inconsistent, and the hyperbolae or circles no longer intersect at a single point. It is now necessary to find an emitter position estimate that minimizes its deviations from the true position. Methods that first linearize the equations and then perform gradient searches for the minimum suffer from initial condition sensitivity and convergence difficulty. Starting from the maximum likelihood (ML) function, this paper derives a closed-form approximate solution to the ML equations. When there are three sensors on a straight line, it also gives an exact ML estimate. Simulation experiments have demonstrated that these algorithms are near optimal, attaining the theoretical lower bound for different geometries, and are superior to two other closed-form linear estimators.
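
One standard way to avoid iterative gradient searches is a closed-form linear least-squares step obtained by differencing the squared-range equations against a reference sensor; the sketch below uses that textbook linearization (not the paper's exact ML-based solution), with a made-up geometry and noise level.

```python
import numpy as np

rng = np.random.default_rng(6)
sensors = np.array([[0.0, 0.0], [800.0, 100.0], [300.0, 900.0], [900.0, 800.0], [-200.0, 500.0]])
emitter = np.array([350.0, 420.0])
sigma = 2.0                                           # range-noise std (m), illustrative

d = np.linalg.norm(sensors - emitter, axis=1)
r = d + sigma * rng.normal(size=len(sensors))         # noisy TOA ranges

# Difference the squared-range equations against sensor 0 -> linear system in the position:
#   2 (s_i - s_0)^T p = (||s_i||^2 - ||s_0||^2) - (r_i^2 - r_0^2)
s0, r0 = sensors[0], r[0]
A = 2.0 * (sensors[1:] - s0)
b = (np.sum(sensors[1:] ** 2, axis=1) - np.sum(s0 ** 2)) - (r[1:] ** 2 - r0 ** 2)
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"true position {emitter}, closed-form estimate {np.round(estimate, 1)}")
```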

Journal ArticleDOI
TL;DR: Two coding schemes that do not require the relay to decode any part of the message are investigated; it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme and that its achievable rate can be improved via time sharing.
Abstract: Upper and lower bounds on the capacity and minimum energy-per-bit for general additive white Gaussian noise (AWGN) and frequency-division AWGN (FD-AWGN) relay channel models are established. First, the max-flow min-cut bound and the generalized block-Markov coding scheme are used to derive upper and lower bounds on capacity. These bounds are never tight for the general AWGN model and are tight only under certain conditions for the FD-AWGN model. Two coding schemes that do not require the relay to decode any part of the message are then investigated. First, it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme. It is also shown that the achievable rate of the side-information coding scheme can be improved via time sharing. In the second scheme, the relaying functions are restricted to be linear. The problem is reduced to a "single-letter" nonconvex optimization problem for the FD-AWGN model. The paper also establishes a relationship between the minimum energy-per-bit and capacity of the AWGN relay channel. This relationship together with the lower and upper bounds on capacity are used to establish corresponding lower and upper bounds on the minimum energy-per-bit that do not differ by more than a factor of 1.45 for the FD-AWGN relay channel model and 1.7 for the general AWGN model.

Journal ArticleDOI
TL;DR: It is proved that the NP-hard distinguishing substring selection problem has no polynomial-time approximation scheme of running time f(1/ε)·n^{o(1/ε)} for any function f unless an unlikely collapse occurs in parameterized complexity theory.

Journal ArticleDOI
TL;DR: An improved version of the FastICA algorithm is proposed which is asymptotically efficient, i.e., its accuracy given by the residual error variance attains the Cramer-Rao lower bound (CRB).
Abstract: FastICA is one of the most popular algorithms for independent component analysis (ICA), demixing a set of statistically independent sources that have been mixed linearly. A key question is how accurate the method is for finite data samples. We propose an improved version of the FastICA algorithm which is asymptotically efficient, i.e., its accuracy given by the residual error variance attains the Cramér-Rao lower bound (CRB). The error is thus as small as possible. This result is rigorously proven under the assumption that the probability distribution of the independent signal components belongs to the class of generalized Gaussian (GG) distributions with parameter alpha, denoted GG(alpha), for alpha > 2. We name the algorithm efficient FastICA (EFICA). Computational complexity of a Matlab implementation of the algorithm is shown to be only slightly (about three times) higher than that of the standard symmetric FastICA. Simulations corroborate these claims and show superior performance of the algorithm compared with algorithm JADE of Cardoso and Souloumiac and nonparametric ICA of Boscolo on separating sources with distribution GG(alpha) with arbitrary alpha, as well as on sources with bimodal distribution, and a good performance in separating linearly mixed speech signals.
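
A compact sketch of standard symmetric FastICA with the tanh nonlinearity, i.e. the baseline that EFICA refines (this is not the EFICA algorithm; the sources and mixing matrix are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
S = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])   # two synthetic non-Gaussian sources
A = rng.normal(size=(2, 2))                                            # random mixing matrix
X = A @ S

# Center and whiten the observations.
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def sym_decorrelate(W):
    d, E = np.linalg.eigh(W @ W.T)
    return E @ np.diag(d ** -0.5) @ E.T @ W          # W <- (W W^T)^{-1/2} W

W = sym_decorrelate(rng.normal(size=(2, 2)))
for _ in range(100):
    Y = W @ Z
    g, g_prime = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    # Fixed-point update: w_i <- E[z g(w_i^T z)] - E[g'(w_i^T z)] w_i, then symmetric decorrelation.
    W_new = sym_decorrelate((g @ Z.T) / n - np.diag(g_prime.mean(axis=1)) @ W)
    converged = np.max(np.abs(np.abs(np.sum(W_new * W, axis=1)) - 1.0)) < 1e-8
    W = W_new
    if converged:
        break

# Rows of W (in whitened space) recover the sources up to sign, permutation and scale.
print("estimated demixing matrix (whitened space):\n", np.round(W, 3))
```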

01 Jan 2006
TL;DR: In this paper, a new exact algorithm for the Capacitated Vehicle Routing Problem (CVRP) based on the set partitioning formulation with additional cuts that correspond to capacity and clique inequalities is presented.
Abstract: This paper presents a new exact algorithm for the Capacitated Vehicle Routing Problem (CVRP) based on the set partitioning formulation with additional cuts that correspond to capacity and clique inequalities. The exact algorithm uses a bounding procedure that finds a near optimal dual solution of the LP-relaxation of the resulting mathematical formulation by combining three dual ascent heuristics. The first dual heuristic is based on the q-route relaxation of the set partitioning formulation of the CVRP. The second one combines Lagrangean relaxation, pricing and cut generation. The third attempts to close the duality gap left by the first two procedures using a classical pricing and cut generation technique. The final dual solution is used to generate a reduced problem containing only the routes whose reduced costs are smaller than the gap between an upper bound and the lower bound achieved. The resulting problem is solved by an integer programming solver. Computational results over the main instances from the literature show the effectiveness of the proposed algorithm.

Posted Content
TL;DR: In this paper, the authors considered the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT) where multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver.
Abstract: We consider the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define suitable security measures for this multi-access environment. Using codebooks generated randomly according to a Gaussian distribution, achievable secrecy rate regions are identified using superposition coding and TDMA coding schemes. An upper bound for the secrecy sum-rate is derived, and our coding schemes are shown to achieve the sum capacity. Numerical results showing the new rate region are presented and compared with the capacity region of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints, quantifying the price paid for secrecy.

Journal ArticleDOI
TL;DR: A new method is provided by introducing free-weighting matrices and employing the lower bound of the time-varying delay; based on the Lyapunov-Krasovskii functional method, a sufficient condition for the asymptotic stability of the system is obtained.

Journal ArticleDOI
TL;DR: A signal intensity based maximum-likelihood target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs) and is much more accurate than the heuristic weighted average methods and can reach the CRLB even with a relatively small amount of data.
Abstract: A signal intensity based maximum-likelihood (ML) target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs). The signal intensity received at local sensors is assumed to be inversely proportional to the square of the distance from the target. The ML estimator and its corresponding Cramér-Rao lower bound (CRLB) are derived. Simulation results show that this estimator is much more accurate than the heuristic weighted average methods, and it can reach the CRLB even with a relatively small amount of data. In addition, the optimal design method for quantization thresholds, as well as two heuristic design methods, are presented. The heuristic design methods, which require minimum prior information about the system, prove to be very robust under various situations.
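
A toy grid-search sketch of the ML idea with one-bit quantization (made-up field geometry, inverse-square intensity model, noise level and threshold; not the paper's estimator or its optimal threshold design): each sensor reports whether its received intensity exceeds a threshold, and the position maximizing the log-likelihood of those bits is selected.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
sensors = rng.uniform(0.0, 100.0, size=(30, 2))        # 30 sensors in a 100 m x 100 m field
target = np.array([62.0, 37.0])
P0, sigma, tau = 5_000.0, 0.5, 1.0                     # made-up source power, noise std, threshold

def intensity(pos):
    d2 = np.sum((sensors - pos) ** 2, axis=1)
    return P0 / (d2 + 1.0)                             # inverse-square decay (regularized near d = 0)

bits = (intensity(target) + sigma * rng.normal(size=len(sensors)) > tau).astype(float)

# Grid-search ML: log-likelihood of the observed bits for every candidate position.
xs = np.linspace(0.0, 100.0, 101)
best, best_ll = None, -np.inf
for x in xs:
    for y in xs:
        p1 = norm.sf((tau - intensity(np.array([x, y]))) / sigma).clip(1e-12, 1 - 1e-12)
        ll = np.sum(bits * np.log(p1) + (1.0 - bits) * np.log(1.0 - p1))
        if ll > best_ll:
            best, best_ll = (x, y), ll

print(f"true target {target}, ML estimate {best}")
```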

Proceedings ArticleDOI
14 Mar 2006
TL;DR: This paper introduces variance shadow maps, a new real time shadowing algorithm that stores the mean and mean squared of a distribution of depths, from which it can efficiently compute the variance over any filter region.
Abstract: Shadow maps are a widely used shadowing technique in real time graphics. One major drawback of their use is that they cannot be filtered in the same way as color textures, typically leading to severe aliasing. This paper introduces variance shadow maps, a new real time shadowing algorithm. Instead of storing a single depth value, we store the mean and mean squared of a distribution of depths, from which we can efficiently compute the variance over any filter region. Using the variance, we derive an upper bound on the fraction of a shaded fragment that is occluded. We show that this bound often provides a good approximation to the true occlusion, and can be used as an approximate value for rendering. Our algorithm is simple to implement on current graphics processors and solves the problem of shadow map aliasing with minimal additional storage and computation.
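
The core computation is the one-sided Chebyshev (Cantelli) bound applied to the two stored depth moments; a small sketch of that bound, with a made-up filter region standing in for the filtered shadow-map texels:

```python
import numpy as np

def vsm_visibility(moment1, moment2, t, min_variance=1e-4):
    """Chebyshev-style bound used in variance shadow maps.

    moment1, moment2: filtered mean depth E[d] and mean squared depth E[d^2]
    t:                depth of the fragment being shaded
    Returns p_max = sigma^2 / (sigma^2 + (t - mu)^2), clamped to 1 when t <= mu.
    """
    mu = moment1
    var = np.maximum(moment2 - moment1 ** 2, min_variance)   # clamp to avoid numeric issues
    p_max = var / (var + (t - mu) ** 2)
    return np.where(t <= mu, 1.0, p_max)                     # fully lit if fragment is closer than the mean occluder

# Toy filter region: half the texels at depth 0.3, half at depth 0.7 (e.g. a shadow edge).
depths = np.array([0.3] * 8 + [0.7] * 8)
m1, m2 = depths.mean(), (depths ** 2).mean()
for t in (0.2, 0.5, 0.8):
    print(f"fragment depth {t}: visibility bound = {float(vsm_visibility(m1, m2, t)):.3f}")
```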

Proceedings ArticleDOI
22 Jan 2006
TL;DR: This paper provides an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems and gives a distributed algorithm using only small messages which obtains a (ρΔ)^{1/k}-approximation in time O(k^2).
Abstract: Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^{1/k}-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^{1/k}) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.

Journal ArticleDOI
TL;DR: In this article, the authors determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero, where the lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear.
Abstract: We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future, and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not imply positive average inflation rates in equilibrium. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks.

Journal ArticleDOI
TL;DR: This paper quantifies the 'sufficient sparsity' condition by defining an equivalence breakdown point (EBP) and describes a semi-empirical heuristic for predicting the local EBP for an ensemble of 'typical' matrices with unit-norm columns.

Journal ArticleDOI
TL;DR: For subsets A of the finite field F_p, p prime, a lower bound on max(|A + A|, |A · A|) and new bounds on the exponential sums $\sum_{x_1,\dots,x_k\in A} \exp(2\pi i x_1\cdots x_k\xi/p)$ were shown in this article.
Abstract: Our first result is a ‘sum-product’ theorem for subsets A of the finite field F_p, p prime, providing a lower bound on max(|A + A|, |A · A|). The second and main result provides new bounds on the exponential sums $\sum_{x_1,\dots,x_k\in A} \exp(2\pi i x_1\cdots x_k\xi/p)$, where A ⊂ F_p.
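
A tiny numeric illustration of the two quantities involved, for a random subset of a small prime field and k = 2 (purely illustrative; unrelated to the proof):

```python
import cmath
import random

random.seed(0)
p = 101
A = random.sample(range(1, p), 12)                 # random subset of F_p^*

sumset = {(a + b) % p for a in A for b in A}
prodset = {(a * b) % p for a in A for b in A}
print(f"|A| = {len(A)}, max(|A+A|, |A*A|) = {max(len(sumset), len(prodset))}")

# Exponential sum S(xi) = sum over x1, x2 in A of exp(2*pi*i*x1*x2*xi / p), for a nonzero frequency xi.
xi = 7
S = sum(cmath.exp(2j * cmath.pi * x1 * x2 * xi / p) for x1 in A for x2 in A)
print(f"|S(xi)| = {abs(S):.2f}  (trivial bound |A|^2 = {len(A) ** 2})")
```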

Journal ArticleDOI
TL;DR: In this article, a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM) under margin type conditions was proposed, with new risk bounds derived when the classification rules belong to some VC-class.
Abstract: We propose a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM). We essentially focus on the binary classification framework. We extend Tsybakov's analysis of the risk of an ERM under margin type conditions by using concentration inequalities for conveniently weighted empirical processes. This allows us to deal with ways of measuring the "size" of a class of classifiers other than entropy with bracketing as in Tsybakov's work. In particular, we derive new risk bounds for the ERM when the classification rules belong to some VC-class under margin conditions and discuss the optimality of these bounds in a minimax sense.

Journal ArticleDOI
TL;DR: In this paper, the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma was determined.
Abstract: We determine the abundance of the lightest (dark matter) sterile neutrinos created in the Early Universe due to active-sterile neutrino transitions from the thermal plasma. Our starting point is the field-theoretic formula for the sterile neutrino production rate, derived in our previous work [JHEP 06(2006)053], which allows us to systematically incorporate all relevant effects and to analyse various hadronic uncertainties. Our numerical results differ moderately from previous computations in the literature, and lead to an absolute upper bound on the mixing angles of the dark matter sterile neutrino. Comparing this bound with existing astrophysical X-ray constraints, we find that the Dodelson-Widrow scenario, which proposes sterile neutrinos generated by active-sterile neutrino transitions to be the sole source of dark matter, is only possible for sterile neutrino masses lighter than 3.5 keV (6 keV if all hadronic uncertainties are pushed in one direction and the most stringent X-ray bounds are relaxed by a factor of two). This upper bound may conflict with a lower bound from structure formation, but a definitive conclusion necessitates numerical simulations with the non-equilibrium momentum distribution function that we derive. If other production mechanisms are also operative, no upper bound on the sterile neutrino mass can be established.