
Showing papers on "Upper and lower bounds published in 2011"


Journal ArticleDOI
TL;DR: This paper proposes a lower bound lemma for such a combination, which achieves performance identical to approaches based on the integral inequality lemma but with far fewer decision variables, comparable to those based on the Jensen inequality lemma.

2,248 citations


Proceedings Article
01 Dec 2011
TL;DR: An O(√(Td ln(KT ln(T)/δ))) regret bound is proved that holds with probability 1 − δ for the simplest known upper confidence bound algorithm for this problem.
Abstract: In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O(√(Td ln(KT ln(T)/δ))) regret bound that holds with probability 1 − δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors.
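The optimistic action-selection rule summarized in this abstract can be sketched in a few lines. The following is an illustrative LinUCB-style implementation only, not the paper's exact algorithm; the function name, array shapes, and the width parameter `alpha` are our own choices.

```python
import numpy as np

def lin_ucb(contexts, rewards, alpha=1.0):
    """One pass of a LinUCB-style algorithm (sketch).

    contexts: T x K x d array of feature vectors (one per action per round).
    rewards:  T x K array of realized payoffs (only the chosen arm's is used).
    alpha:    confidence-width parameter.
    Returns the sequence of chosen actions.
    """
    T, K, d = contexts.shape
    A = np.eye(d)          # ridge-regularized Gram matrix
    b = np.zeros(d)        # running sum of reward-weighted features
    choices = []
    for t in range(T):
        theta = np.linalg.solve(A, b)      # ridge estimate of the payoff vector
        A_inv = np.linalg.inv(A)
        ucb = np.array([
            contexts[t, a] @ theta
            + alpha * np.sqrt(contexts[t, a] @ A_inv @ contexts[t, a])
            for a in range(K)
        ])
        a_star = int(np.argmax(ucb))       # optimistic (highest-UCB) action
        x = contexts[t, a_star]
        A += np.outer(x, x)                # update sufficient statistics
        b += rewards[t, a_star] * x
        choices.append(a_star)
    return choices
```

On a toy instance where one arm's payoff dominates, the sketch concentrates its choices on that arm, as a UCB-style policy should.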

835 citations


Journal ArticleDOI
TL;DR: A measure on graphs, the minrank, is identified, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes, and for natural classes of side information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes.
Abstract: Motivated by a problem of transmitting supplemental data over broadcast channels (Birk and Kol, INFOCOM 1998), we study the following coding problem: a sender communicates with n receivers R1,..., Rn. He holds an input x ∈ {0,1}^n and wishes to broadcast a single message so that each receiver Ri can recover the bit xi. Each Ri has prior side information about x, induced by a directed graph G on n nodes; Ri knows the bits of x in the positions {j | (i,j) is an edge of G}. G is known to the sender and to the receivers. We call encoding schemes that achieve this goal INDEX codes for {0,1}^n with side information graph G. In this paper we identify a measure on graphs, the minrank, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes. We show that for natural classes of side information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes. For arbitrary INDEX codes and arbitrary graphs, we obtain a lower bound in terms of the size of the maximum acyclic induced subgraph. This bound holds even for randomized codes, but has been shown not to be tight.
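For tiny graphs, the minrank over GF(2) can be computed by brute force: enumerate every matrix that "fits" the side-information graph (ones on the diagonal, free entries only where side information exists) and take the minimum rank. This is a hedged illustration of the definition, not the paper's method; function names are our own, and the pentagon value of 3 matches the known result for odd holes.

```python
from itertools import product
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                   # eliminate column c elsewhere
        rank += 1
    return rank

def minrank_gf2(adj):
    """Brute-force minrank over GF(2) of a directed side-information graph.

    adj[i][j] = 1 iff receiver i knows bit j.  A matrix M 'fits' the graph if
    M[i][i] = 1 and M[i][j] = 0 wherever i != j and adj[i][j] = 0.
    Feasible only for very small n (2^(#edges) candidates).
    """
    n = len(adj)
    free = [(i, j) for i in range(n) for j in range(n) if i != j and adj[i][j]]
    best = n                                      # identity always fits
    for bits in product([0, 1], repeat=len(free)):
        M = np.eye(n, dtype=int)
        for (i, j), b in zip(free, bits):
            M[i, j] = b
        best = min(best, gf2_rank(M))
    return best
```

With no side information the minrank is n (each bit must be sent), with complete side information it is 1, and for the 5-cycle (an odd hole) it is 3.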

632 citations


Journal ArticleDOI
TL;DR: In this article, a new nuclear-norm penalized estimator of A0 is proposed, and a general sharp oracle inequality is established for this estimator for arbitrary values of n, m1, m2 under the condition of isometry in expectation.
Abstract: This paper deals with the trace regression model where n entries or linear combinations of entries of an unknown m1 × m2 matrix A0 corrupted by noise are observed. We propose a new nuclear-norm penalized estimator of A0 and establish a general sharp oracle inequality for this estimator for arbitrary values of n, m1, m2 under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form, and we prove that it satisfies oracle inequalities with faster rates of convergence than in the previous works. They are valid, in particular, in the high-dimensional setting m1m2 ≫ n. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix A0, a nonminimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of A0 with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by A0, and the aim is to find the best trace regression model approximating the data. As a by-product, we show that, under the restricted eigenvalue condition, the usual vector Lasso estimator satisfies a sharp oracle inequality (i.e., an oracle inequality with leading constant 1).

613 citations


Journal ArticleDOI
TL;DR: A new, fast, yet reliable method for the construction of PIs for NN predictions, and the quantitative comparison with three traditional techniques for prediction interval construction reveals that the LUBE method is simpler, faster, and more reliable.
Abstract: Prediction intervals (PIs) have been proposed in the literature to provide more information by quantifying the level of uncertainty associated with the point forecasts. Traditional methods for construction of neural network (NN) based PIs suffer from restrictive assumptions about data distribution and massive computational loads. In this paper, we propose a new, fast, yet reliable method for the construction of PIs for NN predictions. The proposed lower upper bound estimation (LUBE) method constructs an NN with two outputs for estimating the prediction interval bounds. NN training is achieved through the minimization of a proposed PI-based objective function, which covers both interval width and coverage probability. The method does not require any information about the upper and lower bounds of PIs for training the NN. The simulated annealing method is applied for minimization of the cost function and adjustment of NN parameters. The demonstrated results for 10 benchmark regression case studies clearly show the LUBE method to be capable of generating high-quality PIs in a short time. Also, the quantitative comparison with three traditional techniques for prediction interval construction reveals that the LUBE method is simpler, faster, and more reliable.
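A coverage-width objective of the kind LUBE minimizes can be sketched directly: penalize average interval width, and add an exponential penalty only when empirical coverage falls below the nominal level. This is a hedged illustration in the spirit of the paper's PI objective; the exact functional form and the hyperparameter values `mu` and `eta` here are our own assumptions.

```python
import numpy as np

def pi_cost(lower, upper, y, mu=0.9, eta=50.0):
    """Coverage-width cost in the spirit of LUBE's PI objective (sketch).

    lower, upper: predicted interval bounds; y: observed targets.
    mu:  nominal coverage level; eta: penalty sharpness (illustrative values).
    """
    lower, upper, y = map(np.asarray, (lower, upper, y))
    picp = np.mean((y >= lower) & (y <= upper))          # empirical coverage
    rng = np.ptp(y) or 1.0                               # normalizing range
    pinaw = np.mean(upper - lower) / rng                 # normalized avg width
    penalty = (picp < mu) * np.exp(-eta * (picp - mu))   # fires on under-coverage
    return pinaw * (1.0 + penalty)
```

A wide interval that covers every target pays only its width; a narrow interval that misses the targets is penalized heavily, which is what drives the two NN outputs toward tight, valid bounds.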

533 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the minimax rates of convergence for estimating β* in both l2-loss and l2-prediction loss, assuming that β* belongs to an lq-ball Bq(Rq) for some q ∈ [0, 1].
Abstract: Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ R^n is an observation vector, X ∈ R^{n × d} is a design matrix with d > n, β* ∈ R^d is an unknown regression vector, and w ~ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in both l2-loss and l2-prediction loss, assuming that β* belongs to an lq-ball Bq(Rq) for some q ∈ [0,1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in l2-loss and l2-prediction loss scales as Θ(Rq ((log d)/n)^{1−q/2}). The analysis in this paper reveals that conditions on the design matrix X enter into the rates for l2-error and l2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls Bq(Rq), whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over lq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient l1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix X than optimal algorithms involving least squares over the l0-ball.

425 citations


Journal ArticleDOI
TL;DR: An explicit matrix product ansatz is presented, in the first two orders in the (weak) coupling parameter, for the nonequilibrium steady state of the homogeneous, nearest neighbor Heisenberg XXZ spin 1/2 chain driven by Lindblad operators which act only at the edges of the chain.
Abstract: An explicit matrix product ansatz is presented, in the first two orders in the (weak) coupling parameter, for the nonequilibrium steady state of the homogeneous, nearest neighbor Heisenberg XXZ spin 1/2 chain driven by Lindblad operators which act only at the edges of the chain. The first order of the density operator becomes, in the thermodynamic limit, an exact pseudolocal conservation law and yields, via the Mazur inequality, a rigorous lower bound on the high-temperature spin Drude weight. Such a Mazur bound is a nonvanishing fractal function of the anisotropy parameter Δ for |Δ| < 1.

417 citations


Journal ArticleDOI
TL;DR: The fundamental tradeoff between energy efficiency (EE) and SE in downlink orthogonal frequency division multiple access (OFDMA) networks is addressed and a low-complexity but near-optimal resource allocation algorithm is developed for practical application of the EE-SE tradeoff.
Abstract: Conventional design of wireless networks mainly focuses on system capacity and spectral efficiency (SE). As green radio (GR) becomes an inevitable trend, energy-efficient design is becoming more and more important. In this paper, the fundamental tradeoff between energy efficiency (EE) and SE in downlink orthogonal frequency division multiple access (OFDMA) networks is addressed. We first set up a general EE-SE tradeoff framework, where the overall EE, SE and per-user quality-of-service (QoS) are all considered, and prove that under this framework, EE is strictly quasiconcave in SE. We then discuss some basic properties, such as the impact of channel power gain and circuit power on the EE-SE relation. We also find a tight upper bound and a tight lower bound on the EE-SE curve for general scenarios, which reflect the actual EE-SE relation. We then focus on a special case in which priority and fairness are considered and suggest an alternative upper bound, which is proved to be achievable for flat fading channels. We also develop a low-complexity but near-optimal resource allocation algorithm for practical application of the EE-SE tradeoff. Numerical results confirm the theoretical findings and demonstrate the effectiveness of the proposed resource allocation scheme for achieving a flexible and desirable tradeoff between EE and SE.
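The strict quasiconcavity of EE in SE proved in this paper means the EE-SE curve is unimodal, so its peak can be located by simple ternary search. The sketch below uses a hypothetical EE model of our own (circuit power `p_c` plus Shannon-inverse transmit power 2^SE − 1), not the paper's system model; with p_c = 1 the maximizer is 1/ln 2.

```python
import math

def energy_efficiency(se, p_c=1.0):
    """Hypothetical EE(SE) model: spectral efficiency per unit power, with
    transmit power 2**se - 1 (inverted Shannon formula) and circuit power p_c."""
    return se / (p_c + 2 ** se - 1)

def argmax_quasiconcave(f, lo, hi, iters=100):
    """Ternary search: valid because a strictly quasiconcave f is unimodal."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1        # maximum lies right of m1
        else:
            hi = m2        # maximum lies left of m2
    return (lo + hi) / 2
```

For p_c = 1 the model reduces to se / 2^se, whose maximum is at se = 1/ln 2 ≈ 1.44 bit/s/Hz, which the search recovers.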

379 citations


Journal ArticleDOI
01 Nov 2011
TL;DR: This paper derives tight upper and lower bounds on the achievable sum-rate, and proposes a transmission scheme based on maximization of the lower bound, which requires us to (numerically) solve a nonconvex optimization problem.
Abstract: In this paper, we consider the problem of full-duplex bidirectional communication between a pair of modems, each with multiple transmit and receive antennas. The principal difficulty in implementing such a system is that, due to the close proximity of each modem's transmit antennas to its receive antennas, each modem's outgoing signal can exceed the dynamic range of its input circuitry, making it difficult, if not impossible, to recover the desired incoming signal. To address these challenges, we consider systems that use pilot-aided channel estimates to perform transmit beamforming, receive beamforming, and interference cancellation. Modeling transmitter/receiver dynamic-range limitations explicitly, we derive tight upper and lower bounds on the achievable sum-rate, and propose a transmission scheme based on maximization of the lower bound, which requires us to (numerically) solve a nonconvex optimization problem. In addition, we derive an analytic approximation to the achievable sum-rate, and show, numerically, that it is quite accurate. We then study the behavior of the sum-rate as a function of signal-to-noise ratio, interference-to-noise ratio, transmitter/receiver dynamic range, number of antennas, and training length, using optimized half-duplex signaling as a baseline.

376 citations


Proceedings ArticleDOI
06 Jun 2011
TL;DR: A new approach to characterizing the unobserved portion of a distribution is introduced, which provides sublinear-sample estimators achieving arbitrarily small additive constant error for a class of properties that includes entropy and distribution support size.
Abstract: We introduce a new approach to characterizing the unobserved portion of a distribution, which provides sublinear-sample estimators achieving arbitrarily small additive constant error for a class of properties that includes entropy and distribution support size. Additionally, we show new matching lower bounds. Together, this settles the longstanding question of the sample complexities of these estimation problems, up to constant factors. Our algorithm estimates these properties up to an arbitrarily small additive constant, using O(n/log n) samples, where n is a bound on the support size, or in the case of estimating the support size, 1/n is a lower bound on the probability of any element of the domain. Previously, no explicit sublinear-sample algorithms for either of these problems were known. Our algorithm is also computationally extremely efficient, running in time linear in the number of samples used. In the second half of the paper, we provide a matching lower bound of Ω(n/log n) samples for estimating entropy or distribution support size to within an additive constant. The previous lower bounds on these sample complexities were n/2^{O(√(log n))}. To show our lower bound, we prove two new and natural multivariate central limit theorems (CLTs); the first uses Stein's method to relate the sum of independent distributions to the multivariate Gaussian of corresponding mean and covariance, under the earthmover distance metric (also known as the Wasserstein metric). We leverage this central limit theorem to prove a stronger but more specific central limit theorem for "generalized multinomial" distributions: a large class of discrete distributions, parameterized by matrices, that represents sums of independent binomial or multinomial distributions, and describes many distributions encountered in computer science.
Convergence here is in the strong sense of statistical distance, which immediately implies that any algorithm with input drawn from a generalized multinomial distribution behaves essentially as if the input were drawn from a discretized Gaussian with the same mean and covariance. Such tools in the multivariate setting are rare, and we hope this new tool will be of use to the community.
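For contrast with the paper's O(n/log n)-sample estimator, the naive plug-in estimator below simply substitutes empirical frequencies into the entropy formula. This is the standard baseline, not the authors' algorithm; it is exactly this estimator's linear-in-support sample requirement that the paper improves upon.

```python
from collections import Counter
import math

def plugin_entropy(samples):
    """Naive plug-in (empirical) entropy estimate in bits.

    Counts each symbol's empirical frequency and evaluates Shannon entropy;
    known to need a sample size roughly linear in the support size, unlike
    the sublinear-sample approach of the paper.
    """
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())
```

On a two-symbol uniform source it returns 1 bit; on a constant source, 0 bits. Its bias on under-sampled distributions (unseen symbols contribute nothing) is precisely the "unobserved portion" the paper characterizes.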

334 citations


Journal ArticleDOI
TL;DR: This work investigates penalized least squares estimators with a Schatten-p quasi-norm penalty term and derives bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S_p^M → S_2^M, p < 1, which are of independent interest.
Abstract: Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of matrix A under the assumption that it has small rank. This can be viewed as dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-p quasi-norm penalty term, p ≤ 1. We study these estimators under two possible assumptions—a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator/Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-q risk of the estimators, where q ∈ [p, 2]. The rates that we obtain for the prediction risk are of the form rm/N (for m = T), up to logarithmic factors, where r is the rank of A. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S_p^M → S_2^M, p < 1, which are of independent interest.

Journal ArticleDOI
TL;DR: In this paper, the authors derive general bounds on operator dimensions, central charges, and OPE coefficients in 4D conformal and $ \mathcal{N} = 1 $ superconformal field theories.
Abstract: We derive general bounds on operator dimensions, central charges, and OPE coefficients in 4D conformal and $ \mathcal{N} = 1 $ superconformal field theories. In any CFT containing a scalar primary ϕ of dimension d we show that crossing symmetry of $ \left\langle {\phi \phi \phi \phi } \right\rangle $ implies a completely general lower bound on the central charge c ≥ f_c(d). Similarly, in CFTs containing a complex scalar charged under global symmetries, we bound a combination of symmetry current two-point function coefficients τ_IJ and flavor charges. We extend these bounds to $ \mathcal{N} = 1 $ superconformal theories by deriving the superconformal block expansions for four-point functions of a chiral superfield Φ and its conjugate. In this case we derive bounds on the OPE coefficients of scalar operators appearing in the Φ × Φ† OPE, and show that there is an upper bound on the dimension of Φ†Φ when dim Φ is close to 1. We also present even more stringent bounds on c and τ_IJ. In supersymmetric gauge theories believed to flow to superconformal fixed points one can use anomaly matching to explicitly check whether these bounds are satisfied.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: The fundamental tradeoff between energy efficiency (EE) and SE in downlink orthogonal frequency division multiple access (OFDMA) networks is addressed and a low-complexity but near-optimal resource allocation algorithm is developed for practical application of the EE-SE tradeoff.
Abstract: Conventional design of wireless networks mainly focuses on system capacity and spectral efficiency (SE). As green radio (GR) becomes an inevitable trend, energy-efficient design in wireless networks is becoming more and more important. In this paper, the fundamental tradeoff relation between energy efficiency (EE) and SE in downlink orthogonal frequency division multiple access (OFDMA) networks is addressed. We obtain a tight upper bound and lower bound on the optimal EE-SE tradeoff relation for general scenarios based on Lagrange dual decomposition, which accurately reflects the optimal EE-SE tradeoff relation. We then focus on a special case in which priority and fairness are considered and derive an alternative upper bound, which is proved to be achievable for flat fading channels. We also develop a low-complexity but near-optimal resource allocation algorithm for practical application of the EE-SE tradeoff. Numerical results demonstrate that the optimal EE-SE tradeoff relation is a bell-shaped curve and can be well approached with our resource allocation algorithm.

Journal ArticleDOI
TL;DR: Under certain conditions, several stability results are established by constructing a sequence of functions that are positive, monotonically decreasing, and convergent to zero as time tends to infinity (additionally continuous for continuous-time systems).
Abstract: This technical note addresses the stability problem of delayed positive switched linear systems whose subsystems are all positive. Both discrete-time systems and continuous-time systems are studied. In our analysis, the delays in systems can be unbounded. Under certain conditions, several stability results are established by constructing a sequence of functions that are positive, monotonically decreasing, and convergent to zero as time tends to infinity (additionally continuous for continuous-time systems). It turns out that these functions can serve as an upper bound of the systems' trajectories starting from a particular region. Finally, a numerical example is presented to illustrate the obtained results.

Journal ArticleDOI
TL;DR: An exponential upper bound is derived for Eve's information in secret key generation from a common random number without communication, based on the Rényi entropy of order 1+s, and is applied to secret key agreement by public discussion.
Abstract: We derive a new upper bound for Eve's information in secret key generation from a common random number without communication. This bound improves on Bennett's bound based on the Rényi entropy of order 2 because the bound obtained here uses the Rényi entropy of order 1+s for s ∈ [0,1]. This bound is applied to a wire-tap channel. Then, we derive an exponential upper bound for Eve's information. Our exponent is compared with Hayashi's exponent. For the additive case, the bound obtained here is better. The result is applied to secret key agreement by public discussion.
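The Rényi entropy of order 1+s that this bound is built on interpolates between Shannon entropy (s → 0) and the order-2 collision entropy used in Bennett's bound (s = 1). A minimal helper, included only to make the quantity concrete:

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in bits) of a probability vector p.

    H_alpha(p) = log2(sum(p_i ** alpha)) / (1 - alpha), with the Shannon
    entropy recovered in the limit alpha -> 1.
    """
    if abs(alpha - 1.0) < 1e-12:                     # Shannon limit
        return -sum(x * math.log2(x) for x in p if x > 0)
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)
```

Rényi entropy is nonincreasing in the order, which is why replacing order 2 by order 1+s with s ≤ 1 can only loosen the entropy term and gives the paper room to optimize the exponent over s.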

Proceedings ArticleDOI
06 Jun 2011
TL;DR: This paper studies the ranking algorithm in the random arrivals model, and shows that it has a competitive ratio of at least 0.696, beating the 1-1/e ≈ 0.632 barrier in the adversarial model.
Abstract: In a seminal paper, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves a competitive ratio of 1-1/e for the online bipartite matching problem in the standard adversarial model, where the ratio of 1-1/e is also shown to be optimal. Their result also implies that in the random arrivals model defined by Goel and Mehta, where the online nodes arrive in a random order, a simple greedy algorithm achieves a competitive ratio of 1-1/e. In this paper, we study the ranking algorithm in the random arrivals model, and show that it has a competitive ratio of at least 0.696, beating the 1-1/e ≈ 0.632 barrier in the adversarial model. Our result also extends to the i.i.d. distribution model of Feldman et al., removing the assumption that the distribution is known. Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the ranking algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random arrivals model, we have the monotonicity property on both sides of the bipartite graph, giving good "strength" to the LPs. Second, to obtain a good lower bound on the optimal values of all these LPs and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of any single one of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP solvers to solve for large instances of the new LPs to establish bounds that would otherwise be difficult to attain by human analysis.
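The RANKING algorithm being analyzed is short enough to state in full: draw one uniformly random permutation of the offline side up front, then match each arriving online node to its highest-ranked free neighbor. A minimal sketch (function and variable names are our own):

```python
import random

def ranking_matching(online_neighbors, n_offline, seed=None):
    """Karp-Vazirani-Vazirani RANKING for online bipartite matching (sketch).

    online_neighbors: list over arriving online nodes, each a list of
    offline-node indices it may be matched to.
    Returns a dict mapping each matched offline node to its online partner.
    """
    rng = random.Random(seed)
    rank = list(range(n_offline))
    rng.shuffle(rank)                      # rank[v] = random priority of offline v
    matched = {}                           # offline node -> online node
    for u, nbrs in enumerate(online_neighbors):
        free = [v for v in nbrs if v not in matched]
        if free:
            # match u to its free neighbor of best (lowest) rank
            matched[min(free, key=lambda v: rank[v])] = u
    return matched
```

Because each online node is matched whenever it has a free neighbor, the output is always a maximal matching; the paper's contribution is showing that when arrival order is also random, this matching is at least 0.696 times the optimum in expectation.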

Journal ArticleDOI
TL;DR: This paper studies the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach and uses an iterative ADP algorithm to obtain the optimal control law.
Abstract: In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.

Posted Content
TL;DR: In this article, the KL-UCB algorithm was shown to have a uniformly better regret bound than UCB or UCB2 and reached the lower bound of Lai and Robbins for Bernoulli rewards.
Abstract: This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB or UCB2; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, UCB2, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviations results of independent interest which are stated and proved in the Appendix. As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.
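The KL-UCB index for an arm is the largest mean q consistent with the observed average, in the sense that n·KL(mean, q) stays below a logarithmic exploration budget. Since the Bernoulli KL divergence is convex and increasing in q above the mean, the index can be computed by bisection. A hedged sketch (the helper names are ours; c = 0 is a common practical choice for the paper's exploration constant):

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clamped for safety."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n_pulls, t, c=0.0):
    """KL-UCB index for Bernoulli rewards (sketch).

    Largest q >= mean with n_pulls * KL(mean, q) <= log(t) + c*log(log(t)),
    found by bisection on [mean, 1].
    """
    bound = math.log(t) + c * math.log(max(math.log(t), 1e-12))
    lo, hi = mean, 1.0
    for _ in range(60):                    # bisection; KL is increasing in q here
        mid = (lo + hi) / 2
        if n_pulls * bernoulli_kl(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo
```

The index always sits at or above the empirical mean and shrinks toward it as the arm is pulled more often, which is the behavior the finite-time analysis exploits.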

Proceedings Article
21 Dec 2011
TL;DR: In this article, the KL-UCB algorithm was shown to have a uniformly better regret bound than UCB and its variants, and reached the lower bound of Lai and Robbins.
Abstract: This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB and its variants; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, MOSS, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviations results of independent interest which are stated and proved in the Appendix. As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.

Journal ArticleDOI
TL;DR: A new simple Lyapunov-Krasovskii functional (LKF) approach without delay decomposition is proposed, which includes cross terms of variables and quadratic terms multiplied by a higher-degree scalar function, and a new result expressed in the form of LMIs is presented.

Journal ArticleDOI
TL;DR: An efficient distributed algorithm is proposed that produces a collision-free schedule for data aggregation in WSNs and it is theoretically proved that the delay of the aggregation schedule generated by the algorithm is at most 16R + Δ - 14 time slots.
Abstract: Data aggregation is a key functionality in wireless sensor networks (WSNs). This paper focuses on data aggregation scheduling problem to minimize the delay (or latency). We propose an efficient distributed algorithm that produces a collision-free schedule for data aggregation in WSNs. We theoretically prove that the delay of the aggregation schedule generated by our algorithm is at most 16R + Δ - 14 time slots. Here, R is the network radius and Δ is the maximum node degree in the communication graph of the original network. Our algorithm significantly improves the previously known best data aggregation algorithm with an upper bound of delay of 24D + 6Δ + 16 time slots, where D is the network diameter (note that D can be as large as 2R). We conduct extensive simulations to study the practical performances of our proposed data aggregation algorithm. Our simulation results corroborate our theoretical results and show that our algorithms perform better in practice. We prove that the overall lower bound of delay for data aggregation under any interference model is max{log n, R}, where n is the network size. We provide an example to show that the lower bound is (approximately) tight under the protocol interference model when rI = r, where rI is the interference range and r is the transmission range. We also derive the lower bound of delay under the protocol interference model when r < rI < 3r and rI ≥ 3r.

Journal ArticleDOI
TL;DR: It is proved that the performance of the improved LLS estimator achieves the Cramér-Rao lower bound at sufficiently small noise conditions, and the variances of the position estimates are derived and confirmed by computer simulations.
Abstract: A conventional approach for passive source localization is to utilize signal strength measurements of the emitted source received at an array of spatially separated sensors. The received signal strength (RSS) information can be converted to distance estimates for constructing a set of circular equations, from which the target position is determined. Nevertheless, a major challenge in this approach lies in the shadow fading effect which corresponds to multiplicative measurement errors. By utilizing the mean and variance of the squared distance estimates, we devise two linear least squares (LLS) estimators for RSS-based positioning in this paper. The first one is a best linear unbiased estimator while the second is its improved version by exploiting the known relation between the parameter estimates. The variances of the position estimates are derived and confirmed by computer simulations. In particular, it is proved that the performance of the improved LLS estimator achieves the Cramér-Rao lower bound at sufficiently small noise conditions.
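The "circular equations" step can be made concrete: each distance estimate d_i gives ||p − a_i||² = d_i², and subtracting the first equation from the rest cancels the quadratic term, leaving a linear system in the unknown position p. The sketch below shows that linearization only (a basic LLS, not the paper's two weighted estimators); names are our own, and the RSS-to-distance conversion via a path-loss model is assumed to have happened upstream.

```python
import numpy as np

def lls_position(anchors, d):
    """Linear least squares source localization from distance estimates (sketch).

    anchors: (N, 2) sensor positions; d: N distance estimates (e.g. converted
    from RSS readings via a path-loss model).  Subtracting the first circular
    equation from the others linearizes the system in the position (x, y):
        2 (a_i - a_0) . p = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(d, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution
    return pos
```

With noise-free distances the true position is recovered exactly; the paper's contribution is handling the multiplicative (shadow fading) errors through the mean and variance of the squared distance estimates and weighting accordingly.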

Journal ArticleDOI
TL;DR: The objective of these papers is to provide a framework for understanding how this growth in energy consumption can be managed, and to explore the fundamental limits on energy consumption in optical communication systems and networks.
Abstract: The capacity and geographical coverage of the global communications network continue to expand. One consequence of this expansion is a steady growth in the overall energy consumption of the network. This is the first of two papers that explore the fundamental limits on energy consumption in optical communication systems and networks. The objective of these papers is to provide a framework for understanding how this growth in energy consumption can be managed. This paper (Part I) focuses on the energy consumption in optically amplified transport systems. The accompanying paper (Part II) focuses on energy consumption in networks. A key focus of both papers is an analysis of the lower bound on energy consumption. This lower bound gives an indication of the best possible energy efficiency that could ever be achieved. The lower bound on energy in transport systems is limited by the energy consumption in optical amplifiers, and in optical transmitters and receivers. The performance of an optical transport system is ultimately set by the Shannon bound on receiver sensitivity, and depends on factors such as the modulation format, fiber losses, system length, and the spontaneous noise in optical amplifiers. Collectively, these set a lower bound on the number of amplifiers required, and hence, the amplifier energy consumption. It is possible to minimize the total energy consumption of an optically amplified system by locating repeaters strategically. The lower bound on energy consumption in optical transmitters and receivers is limited by device and circuit factors. In commercial optical transport systems, the energy consumption is at least two orders of magnitude larger than the ideal lower bounds described here. The difference between the ideal lower bounds and the actual energy consumption in commercial systems is due to inefficiencies and energy overheads. A key strategy in reducing the energy consumption of optical transport systems will be to reduce these inefficiencies and overheads.
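As a rough illustration of the amplifier-count reasoning in this abstract, the sketch below counts in-line amplifiers on a span-by-span basis and divides total amplifier power by the line rate to get an energy-per-bit figure. All numbers (80 km spans, 1 W per amplifier, 100 Gb/s) are illustrative assumptions, not values from the paper, and the model ignores transmitter/receiver energy entirely.

```python
import math

def min_amplifiers(length_km, span_km):
    """Number of in-line amplifiers if one amplifier terminates each span."""
    return math.ceil(length_km / span_km)

def amplifier_energy_per_bit(length_km, span_km, power_per_amp_w, bit_rate_bps):
    """Toy lower bound on amplifier energy per transported bit:
    total amplifier power divided by the channel bit rate."""
    n = min_amplifiers(length_km, span_km)
    return n * power_per_amp_w / bit_rate_bps

# Example: 3000 km link, 80 km spans, 1 W per amplifier, 100 Gb/s channel.
n_amps = min_amplifiers(3000, 80)                              # 38 amplifiers
e_bit = amplifier_energy_per_bit(3000, 80, 1.0, 100e9)         # 3.8e-10 J/bit
```

Even this crude estimate shows the structure of the bound: amplifier count scales linearly with system length, so energy per bit does too, which is why repeater placement matters in the paper's analysis.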

Journal ArticleDOI
TL;DR: In this article, a general analysis of crossing symmetry constraints in 4D conformal field theory with a continuous global symmetry group is given, where phi is a primary scalar operator in a given representation R and the coefficients in these sum rules are related to the Fierz transformation matrices for the R ⊗ R ⊗ R̄ ⊗ R̄ invariant tensors.
Abstract: We explore the constraining power of OPE associativity in 4D conformal field theory with a continuous global symmetry group. We give a general analysis of crossing symmetry constraints in the 4-point function of phi, where phi is a primary scalar operator in a given representation R. These constraints take the form of 'vectorial sum rules' for conformal blocks of operators whose representations appear in R ⊗ R and R ⊗ R̄. The coefficients in these sum rules are related to the Fierz transformation matrices for the R ⊗ R ⊗ R̄ ⊗ R̄ invariant tensors. We show that the number of equations is always equal to the number of symmetry channels to be constrained. We also analyze in detail two cases: the fundamental of SO(N) and the fundamental of SU(N). We derive the vectorial sum rules explicitly, and use them to study the dimension of the lowest singlet scalar in the phi × phi† OPE. We prove the existence of an upper bound on the dimension of this scalar. The bound depends on the conformal dimension of phi and approaches 2 in the limit dim(phi) -> 1. For several small groups, we compute the behavior of the bound at dim(phi) > 1. We discuss implications of our bound for the conformal technicolor scenario of electroweak symmetry breaking.

Journal ArticleDOI
TL;DR: In this paper, a blow-up criterion in terms of the upper bound of the density for the strong solution to the 3-D compressible Navier-Stokes equations is presented.

Book
18 Sep 2011
TL;DR: In this book, the authors study the residual lifetime distribution and its mean for several reliability classes of distributions, and derive bounds on ratios of discrete tail probabilities.
Abstract (table of contents):
1 Introduction
2 Reliability background: 2.1 The failure rate; 2.2 Equilibrium distributions; 2.3 The residual lifetime distribution and its mean; 2.4 Other classes of distributions; 2.5 Discrete reliability classes; 2.6 Bounds on ratios of discrete tail probabilities
3 Mixed Poisson distributions: 3.1 Tails of mixed Poisson distributions; 3.2 The radius of convergence; 3.3 Bounds on ratios of tail probabilities; 3.4 Asymptotic tail behaviour of mixed Poisson distributions
4 Compound distributions: 4.1 Introduction and examples; 4.2 The general upper bound; 4.3 The general lower bound; 4.4 A Wald-type martingale approach
5 Bounds based on reliability classifications: 5.1 First order properties; 5.2 Bounds based on equilibrium properties
6 Parametric Bounds: 6.1 Exponential bounds; 6.2 Pareto bounds; 6.3 Product based bounds
7 Compound geometric and related distributions: 7.1 Compound modified geometric distributions; 7.2 Discrete compound geometric distributions; 7.3 Application to ruin probabilities; 7.4 Compound negative binomial distributions
8 Tijms approximations: 8.1 The asymptotic geometric case; 8.2 The modified geometric distribution; 8.3 Transform derivation of the approximation
9 Defective renewal equations: 9.1 Some properties of defective renewal equations; 9.2 The time of ruin and related quantities; 9.3 Convolutions involving compound geometric distributions
10 The severity of ruin: 10.1 The associated defective renewal equation; 10.2 A mixture representation for the conditional distribution; 10.3 Erlang mixtures with the same scale parameter; 10.4 General Erlang mixtures; 10.5 Further results
11 Renewal risk processes: 11.1 General properties of the model; 11.2 The Coxian-2 case; 11.3 The sum of two exponentials; 11.4 Delayed and equilibrium renewal risk processes
Symbol Index; Author Index
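The exponential bounds and ruin-probability applications listed in the contents (Chapters 6 and 7) generalize the classical Lundberg bound psi(u) <= exp(-R*u), where the adjustment coefficient R solves lam*(M(r) - 1) = c*r. Below is a minimal numeric sketch of that calculation, not code from the book; the parameter values (unit-mean exponential claims, Poisson rate 1, premium rate 1.5) are illustrative assumptions.

```python
import math

def adjustment_coefficient(lam, c, mgf, r_hi):
    """Lundberg adjustment coefficient: positive root of
    lam*(mgf(r) - 1) = c*r, found by bisection on (0, r_hi)."""
    f = lambda r: lam * (mgf(r) - 1.0) - c * r
    lo, hi = 1e-12, r_hi          # f(lo) < 0 when c > lam*mean, f(r_hi) > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Exponential(1) claims: mgf(r) = 1/(1-r); closed-form root is R = 1 - lam/c = 1/3.
R = adjustment_coefficient(1.0, 1.5, lambda r: 1.0 / (1.0 - r), 0.99)
lundberg_bound = math.exp(-R * 2.0)   # exponential upper bound on psi(u) at u = 2
```

Bisection is used here instead of the closed form to mimic the general case, where the claim-size moment generating function has no algebraic root.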

Journal ArticleDOI
TL;DR: This work considers the probabilistic numerical scheme for fully nonlinear PDEs suggested in [cstv], and shows that it can be introduced naturally as a combination of Monte Carlo and finite-difference schemes without appealing to the theory of backward stochastic differential equations.
Abstract: We consider the probabilistic numerical scheme for fully nonlinear PDEs suggested in [cstv], and show that it can be introduced naturally as a combination of Monte Carlo and finite-difference schemes without appealing to the theory of backward stochastic differential equations. Our first main result provides the convergence of the discrete-time approximation and derives a bound on the discretization error in terms of the time step. An explicit implementable scheme requires approximating the conditional expectation operators involved in the discretization. This induces a further Monte Carlo error. Our second main result is to prove the convergence of the latter approximation scheme, and to derive an upper bound on the approximation error. Numerical experiments are performed for the approximation of the solution of the mean curvature flow equation in dimensions two and three, and for two- and five-dimensional (plus time) fully nonlinear Hamilton-Jacobi-Bellman equations arising in the theory of portfolio optimization in financial mathematics.
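The paper's scheme targets fully nonlinear PDEs; the sketch below only illustrates its Monte Carlo building block, the conditional expectation operator, in the simplest linear case. For the heat equation u_t + 0.5*sigma^2*u_xx = 0 with terminal condition g, the solution is u(t, x) = E[g(x + sigma*W_{T-t})], which can be estimated by simulation (function names and parameters here are invented for illustration).

```python
import math
import random

def mc_heat(x, tau, sigma, g, n_paths, seed=0):
    """Monte Carlo estimate of u = E[g(x + sigma*W_tau)], the solution of
    u_t + 0.5*sigma^2*u_xx = 0 evaluated tau units before the terminal time."""
    rng = random.Random(seed)
    s = sigma * math.sqrt(tau)   # std. dev. of the Brownian increment over tau
    return sum(g(x + s * rng.gauss(0.0, 1.0)) for _ in range(n_paths)) / n_paths

# Terminal condition g(y) = y^2: the exact value is x^2 + sigma^2*tau = 1.5,
# so the Monte Carlo error is visible directly.
est = mc_heat(1.0, 0.5, 1.0, lambda y: y * y, 200_000)
```

The second main result of the paper concerns exactly the extra error introduced by replacing the conditional expectation with such a simulation-based estimator.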

Journal ArticleDOI
TL;DR: This letter proposes a low-complexity algorithm for large-MIMO detection based on a layered low-complexity local neighborhood search, and uses the same search to obtain a lower bound on the maximum-likelihood (ML) bit error performance.
Abstract: In this letter, we are concerned with low-complexity detection in large multiple-input multiple-output (MIMO) systems with tens of transmit/receive antennas. Our new contributions in this letter are two-fold. First, we propose a low-complexity algorithm for large-MIMO detection based on a layered low-complexity local neighborhood search. Second, we obtain a lower bound on the maximum-likelihood (ML) bit error performance using the local neighborhood search. The advantages of the proposed ML lower bound are i) it is easily obtained for MIMO systems with a large number of antennas because of the inherent low complexity of the search algorithm, ii) it is tight at moderate-to-high SNRs, and iii) it can be tightened at low SNRs by increasing the number of symbols in the neighborhood definition. The proposed detection algorithm based on the layered local neighborhood search achieves bit error performances quite close to this lower bound for large numbers of antennas and higher-order QAM.
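The core idea of a local neighborhood search detector can be sketched generically: starting from some vector x, repeatedly test all 1-symbol flips and keep any that reduce ||y - Hx||^2. This is not the letter's layered algorithm, just a minimal greedy sketch for real-valued BPSK with an invented toy channel.

```python
import random

def local_search_detect(H, y, n_sweeps=50, seed=0):
    """Greedy 1-flip local neighborhood search for BPSK MIMO detection:
    minimize ||y - H x||^2 over x in {-1,+1}^n, one symbol flip at a time."""
    rng = random.Random(seed)
    n = len(H[0])

    def cost(v):
        return sum((yi - sum(h * s for h, s in zip(row, v))) ** 2
                   for row, yi in zip(H, y))

    x = [rng.choice((-1, 1)) for _ in range(n)]   # random initial vector
    best = cost(x)
    for _ in range(n_sweeps):
        improved = False
        for j in range(n):            # scan the 1-flip neighborhood of x
            x[j] = -x[j]
            c = cost(x)
            if c < best:
                best, improved = c, True
            else:
                x[j] = -x[j]          # revert non-improving flip
        if not improved:
            break                     # local minimum reached
    return x, best

# Toy 2x2 channel, noiseless: y = H @ [1, -1], so the search recovers it exactly.
x_hat, c_hat = local_search_detect([[1.0, 0.0], [0.0, 1.0]], [1.0, -1.0])
```

The ML lower bound in the letter exploits the same machinery: because the search is cheap, the best metric found in the neighborhood can be evaluated even for tens of antennas, where exhaustive ML search is infeasible.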

Journal ArticleDOI
TL;DR: This paper presents a new exact algorithm for the PDPTW based on a set-partitioning-like integer formulation, and describes a bounding procedure that finds a near-optimal dual solution of the LP-relaxation of the formulation by combining two dual ascent heuristics and a cut-and-column generation procedure.
Abstract: The pickup and delivery problem with time windows (PDPTW) is a generalization of the vehicle routing problem with time windows. In the PDPTW, a set of identical vehicles located at a central depot must be optimally routed to service a set of transportation requests subject to capacity, time window, pairing, and precedence constraints. In this paper, we present a new exact algorithm for the PDPTW based on a set-partitioning-like integer formulation, and we describe a bounding procedure that finds a near-optimal dual solution of the LP-relaxation of the formulation by combining two dual ascent heuristics and a cut-and-column generation procedure. The final dual solution is used to generate a reduced problem containing only the routes whose reduced costs are smaller than the gap between a known upper bound and the lower bound achieved. If the resulting problem has moderate size, it is solved by an integer programming solver; otherwise, a branch-and-cut-and-price algorithm is used to close the integrality gap. Extensive computational results over the main instances from the literature show the effectiveness of the proposed exact method.
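The route-reduction step described above, keeping only routes whose reduced cost is below the gap between the known upper bound and the achieved lower bound, can be sketched as follows. This is a simplified illustration: routes are (cost, covered-requests) pairs, duals are per-request values, and all numbers are invented; the paper's formulation also carries cut duals omitted here.

```python
def filter_routes(routes, duals, upper_bound, lower_bound):
    """Keep only routes whose reduced cost is below the optimality gap UB - LB.
    A route with larger reduced cost cannot appear in an optimal solution."""
    gap = upper_bound - lower_bound
    kept = []
    for cost, covered in routes:
        reduced = cost - sum(duals[i] for i in covered)
        if reduced < gap:
            kept.append((cost, covered))
    return kept

routes = [(10.0, {1, 2}), (7.0, {2, 3}), (15.0, {1, 3})]
duals = {1: 4.0, 2: 3.0, 3: 2.0}
# Reduced costs are 3.0, 2.0, 9.0; with UB=20 and LB=15 the gap is 5.0,
# so the third route is pruned.
kept = filter_routes(routes, duals, 20.0, 15.0)
```

The strength of this pruning depends entirely on the quality of the dual solution and the upper bound, which is why the paper invests in dual ascent heuristics and cut-and-column generation before building the reduced problem.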

Journal ArticleDOI
TL;DR: It is proved here that the Jensen gap can be made arbitrarily small provided that the order of uniform fragmentation is chosen sufficiently large.
Abstract: Jensen's inequality plays a crucial role in the analysis of time-delay and sampled-data systems. Its conservatism is studied through the use of the Grüss inequality. It has been reported in the literature that fragmentation (or partitioning) schemes empirically improve the results. We prove here that the Jensen gap can be made arbitrarily small provided that the order of uniform fragmentation is chosen sufficiently large. Nonuniform fragmentation schemes are also shown to speed up the convergence in certain cases. Finally, a family of bounds is characterized and a comparison with other bounds from the literature is provided. It is shown that the other bounds are equivalent to Jensen's and that they exhibit interesting well-posedness and linearity properties which can be exploited to obtain better numerical results.
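The paper's central claim can be checked numerically in the scalar case. Jensen's inequality gives int_0^h x(t)^2 dt >= (1/h)*(int_0^h x dt)^2; uniform fragmentation into n subintervals replaces the right side by sum_i (n/h)*(int_{I_i} x dt)^2, and the gap should shrink as n grows. The sketch below (midpoint-rule quadrature, x = sin, weight R = 1, all choices illustrative) demonstrates this.

```python
import math

def jensen_gap(f, h, n_frag, m=2000):
    """Gap between int_0^h f^2 dt and the order-n_frag fragmented Jensen lower
    bound sum_i (n_frag/h)*(int_{I_i} f dt)^2, via the midpoint rule."""
    dt = h / (m * n_frag)
    quad = 0.0                 # accumulates int_0^h f(t)^2 dt
    bound = 0.0                # accumulates the fragmented Jensen bound
    for i in range(n_frag):
        seg = 0.0              # int of f over fragment i
        for k in range(m):
            t = (i * m + k + 0.5) * dt
            v = f(t)
            quad += v * v * dt
            seg += v * dt
        bound += seg * seg * n_frag / h
    return quad - bound

# The gap is nonnegative and decreases monotonically in the fragmentation order.
gaps = [jensen_gap(math.sin, 1.0, n) for n in (1, 2, 4, 8)]
```

For x = sin on [0, 1] the gap drops roughly by a factor of four per doubling of n, consistent with the arbitrarily-small-gap result proved in the paper for uniform fragmentation.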