
Showing papers on "Upper and lower bounds published in 2008"


Journal ArticleDOI
TL;DR: The empirical upper-bound relationship for membrane separation of gases, initially published in 1991, is reviewed in light of the much larger body of data now available; the analysis indicates a different solubility-selectivity relationship for perfluorinated polymers compared with hydrocarbon/aromatic polymers.

4,525 citations


Proceedings Article
01 Jan 2008
TL;DR: A nearly complete characterization of the classical stochastic k-armed bandit problem in terms of both upper and lower bounds for the regret is given, and two variants of an algorithm based on the idea of “upper confidence bounds” are presented.
Abstract: In the classical stochastic k-armed bandit problem, in each of a sequence of T rounds, a decision maker chooses one of k arms and incurs a cost chosen from an unknown distribution associated with that arm. The goal is to minimize regret, defined as the difference between the cost incurred by the algorithm and the optimal cost. In the linear optimization version of this problem (first considered by Auer [2002]), we view the arms as vectors in R^n, and require that the costs be linear functions of the chosen vector. As before, it is assumed that the cost functions are sampled independently from an unknown distribution. In this setting, the goal is to find algorithms whose running time and regret behave well as functions of the number of rounds T and the dimensionality n (rather than the number of arms, k, which may be exponential in n or even infinite). We give a nearly complete characterization of this problem in terms of both upper and lower bounds for the regret. In certain special cases (such as when the decision region is a polytope), the regret is polylog(T). In general though, the optimal regret is Θ*(√T); our lower bounds rule out the possibility of obtaining polylog(T) rates in general. We present two variants of an algorithm based on the idea of “upper confidence bounds.” The first, due to Auer [2002], but not fully analyzed, obtains regret whose dependence on n and T are both essentially optimal, but which may be computationally intractable when the decision set is a polytope. The second version can be efficiently implemented when the decision set is a polytope (given as an intersection of half-spaces), but gives up a factor of √n in the regret bound. Our results also extend to the setting where the set of allowed decisions may change over time.
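
The "upper confidence bound" idea referenced above can be illustrated on the classical k-armed version of the problem. Below is a minimal sketch, not the paper's linear-optimization algorithm: each arm keeps a running mean cost and is chosen optimistically via a confidence bonus. The cost model in the toy usage is an arbitrary assumption.

```python
# Minimal sketch of the classical "upper confidence bound" rule for a
# k-armed stochastic bandit with costs in [0, 1] (optimism = pick the arm
# with the lowest optimistic estimate of its mean cost).
import math
import random

def ucb_play(k, T, sample_cost):
    """sample_cost(i) returns a random cost in [0, 1] for arm i."""
    counts = [0] * k
    means = [0.0] * k
    total = 0.0
    for t in range(1, T + 1):
        if t <= k:                       # play every arm once
            arm = t - 1
        else:                            # optimistic (lower) confidence bound on cost
            arm = min(range(k),
                      key=lambda i: means[i] - math.sqrt(2 * math.log(t) / counts[i]))
        c = sample_cost(arm)
        counts[arm] += 1
        means[arm] += (c - means[arm]) / counts[arm]   # incremental mean update
        total += c
    return total

# toy usage: arm 0 has the lowest mean cost, so it should dominate the plays
random.seed(0)
cost = ucb_play(3, 10_000, lambda i: random.random() * (0.3 + 0.2 * i))
print("average cost per round:", cost / 10_000)
```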

821 citations


Journal ArticleDOI
TL;DR: This paper shows that a macroscopic fundamental diagram (MFD) relating average flow and average density must exist on any street with blocks of diverse widths and lengths, but no turns, even if all or some of the intersections are controlled by arbitrarily timed traffic signals.

Abstract: This paper shows that a macroscopic fundamental diagram (MFD) relating average flow and average density must exist on any street with blocks of diverse widths and lengths, but no turns, even if all or some of the intersections are controlled by arbitrarily timed traffic signals. The timing patterns are assumed to be fixed in time. Exact analytical expressions in terms of a shortest-path recipe are given both for the street’s capacity and for its MFD. Approximate formulas that require little data are also given. For networks, the paper derives an upper bound for average flow conditional on average density, and then suggests conditions under which the bound should be tight, i.e., under which the bound is an approximate MFD. The MFDs produced with this method for the central business districts of San Francisco (California) and Yokohama (Japan) are compared with those obtained experimentally in earlier publications.

599 citations


Journal ArticleDOI
TL;DR: This work begins with the standard design under the assumption of a total power constraint and proves that precoders based on the pseudo-inverse are optimal among the generalized inverses in this setting, and examines individual per-antenna power constraints.
Abstract: We consider the problem of linear zero-forcing precoding design and discuss its relation to the theory of generalized inverses in linear algebra. Special attention is given to a specific generalized inverse known as the pseudo-inverse. We begin with the standard design under the assumption of a total power constraint and prove that precoders based on the pseudo-inverse are optimal among the generalized inverses in this setting. Then, we proceed to examine individual per-antenna power constraints. In this case, the pseudo-inverse is not necessarily the optimal inverse. In fact, finding the optimal matrix is nontrivial and depends on the specific performance measure. We address two common criteria, fairness and throughput, and show that the optimal generalized inverses may be found using standard convex optimization methods. We demonstrate the improved performance offered by our approach using computer simulations.
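
As a concrete illustration of the total-power-constraint case, the sketch below builds a zero-forcing precoder from the pseudo-inverse of the channel matrix. The matrix dimensions and the unit-power normalization are illustrative assumptions, not the paper's simulation setup.

```python
# A minimal sketch of zero-forcing precoding with the pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 6                      # K single-antenna users, M transmit antennas
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

P = np.linalg.pinv(H)            # pseudo-inverse precoder: H @ P = I_K
P = P / np.linalg.norm(P, 'fro') # normalize to unit total transmit power

# the effective channel H @ P is a scaled identity, i.e. inter-user
# interference is forced to zero
print(np.round(H @ P, 6))
```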

588 citations


Posted Content
TL;DR: A communication system in which two transmitters exchange information through a central relay is considered; using lattice codes and lattice decoding, a rate of 1/2 log(1/2 + snr) bits per transmitter is obtained, which is essentially optimal at high SNR.
Abstract: We consider the problem of two transmitters wishing to exchange information through a relay in the middle. The channels between the transmitters and the relay are assumed to be synchronized, average-power-constrained additive white Gaussian noise channels with a real input and signal-to-noise ratio (SNR) of snr. An upper bound on the capacity is 1/2 log(1 + snr) bits per transmitter per use of the medium-access phase and broadcast phase of the bi-directional relay channel. We show that using lattice codes and lattice decoding, we can obtain a rate of 1/2 log(1/2 + snr) bits per transmitter, which is essentially optimal at high SNR. The main idea is to decode the sum of the codewords modulo a lattice at the relay, followed by a broadcast phase which performs Slepian-Wolf coding with structured codes. For asymptotically low SNR, jointly decoding the two transmissions at the relay (MAC channel) is shown to be optimal. We also show that if the two transmitters use identical lattices with minimum-angle decoding, we can achieve the same rate of 1/2 log(1/2 + snr). The proposed scheme can be thought of as a joint physical-layer/network-layer code which outperforms other recently proposed analog network coding schemes.
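
A short worked comparison of the two rate expressions quoted above; the SNR values are arbitrary, and the point is only that the gap becomes negligible at high SNR.

```python
# Compare the capacity upper bound 1/2*log2(1+SNR) with the lattice-coding
# rate 1/2*log2(1/2+SNR) for a few SNR values.
import math

for snr_db in (0, 10, 20, 40):
    snr = 10 ** (snr_db / 10)
    upper = 0.5 * math.log2(1 + snr)
    lattice = 0.5 * math.log2(0.5 + snr)
    print(f"SNR={snr_db:2d} dB  upper={upper:.3f}  lattice={lattice:.3f}  gap={upper - lattice:.3f}")
```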

503 citations


Journal ArticleDOI
TL;DR: It is shown that the minimum error covariance estimator is time-varying, stochastic, and does not converge to a steady state; its architecture is independent of the communication protocol and can be implemented using a finite memory buffer if the delivered packets have a finite maximum delay.
Abstract: In this note, we study optimal estimation design for sampled linear systems where the sensor measurements are transmitted to the estimator site via a generic digital communication network. Sensor measurements are subject to random delay or might even be completely lost. We show that the minimum error covariance estimator is time-varying, stochastic, and does not converge to a steady state. Moreover, the architecture of this estimator is independent of the communication protocol and can be implemented using a finite memory buffer if the delivered packets have a finite maximum delay. We also present two alternative estimator architectures that are more computationally efficient and provide upper and lower bounds for the performance of the time-varying estimator. The stability of these estimators does not depend on packet delay but only on the overall packet loss probability. Finally, algorithms to compute the critical packet loss probability and the estimators' performance in terms of their error covariance are given and applied to some numerical examples.

478 citations


Journal ArticleDOI
TL;DR: Sufficient conditions for stochastic stability of the underlying systems are derived via the linear matrix inequality (LMI) formulation, and the design of the stabilizing controller is further given.
Abstract: In this note, the stability analysis and stabilization problems for a class of discrete-time Markov jump linear systems with partially known transition probabilities and time-varying delays are investigated. The time delay is considered to be time-varying and to have lower and upper bounds. The transition probabilities of the mode jumps are considered to be partially known, which relaxes the traditional assumption in Markov jump systems that all of them must be completely known a priori. Following the recent study on this class of systems, a monotonicity is further observed concerning the conservatism of obtaining the maximal delay range due to the unknown elements in the transition probability matrix. Sufficient conditions for stochastic stability of the underlying systems are derived via the linear matrix inequality (LMI) formulation, and the design of the stabilizing controller is further given. A numerical example is used to illustrate the developed theory.

478 citations


Journal ArticleDOI
TL;DR: A new model based on the updating instants of the holder is formulated, and a linear matrix inequality (LMI)-based procedure is proposed for designing state-feedback controllers, which guarantee that the output of the closed-loop networked control system tracks the output of a given reference model well in the H∞ sense.

Abstract: This paper is concerned with the problem of H∞ output tracking for network-based control systems. The physical plant and the controller are, respectively, in continuous time and discrete time. By using a sampled-data approach, a new model based on the updating instants of the holder is formulated, and a linear matrix inequality (LMI)-based procedure is proposed for designing state-feedback controllers, which guarantee that the output of the closed-loop networked control system tracks the output of a given reference model well in the H∞ sense. Both network-induced delays and data packet dropouts have been taken into consideration in the controller design. The network-induced delays are assumed to have both an upper bound and a lower bound, which is more general than those used in the literature. The introduction of the lower bound is shown to be advantageous for reducing conservatism. Moreover, the controller design method is further extended to more general cases, where the system matrices of the physical plant contain parameter uncertainties, represented in either polytopic or norm-bounded frameworks. Finally, an illustrative example is presented to show the usefulness and effectiveness of the proposed H∞ output tracking design.

389 citations


Proceedings Article
08 Dec 2008
TL;DR: This work presents a reinforcement learning algorithm with total regret O(DS√(AT)) after T steps for any unknown MDP with S states, A actions per state, and diameter D, and proposes a new parameter: an MDP has diameter D if for any pair of states s, s' there is a policy which moves from s to s' in at most D steps.
Abstract: For undiscounted reinforcement learning in Markov decision processes (MDPs) we consider the total regret of a learning algorithm with respect to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: An MDP has diameter D if for any pair of states s, s' there is a policy which moves from s to s' in at most D steps (on average). We present a reinforcement learning algorithm with total regret O(DS√(AT)) after T steps for any unknown MDP with S states, A actions per state, and diameter D. This bound holds with high probability. We also present a corresponding lower bound of Ω(√(DSAT)) on the total regret of any learning algorithm.
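
To make the scaling of the two bounds concrete, the snippet below evaluates them for some arbitrary values of D, S, A, and T; constants and logarithmic factors are omitted, so only the growth rates are meaningful.

```python
# Rough illustration of how the stated regret bounds scale:
# upper bound ~ D*S*sqrt(A*T), lower bound ~ sqrt(D*S*A*T).
import math

D, S, A = 10, 50, 5
for T in (10**4, 10**6, 10**8):
    upper = D * S * math.sqrt(A * T)
    lower = math.sqrt(D * S * A * T)
    print(f"T={T:>9d}  upper ~ {upper:.3e}  lower ~ {lower:.3e}")
```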

344 citations


Journal ArticleDOI
TL;DR: These bounds are used to motivate an implementable multiuser precoding strategy that combines Tomlinson-Harashima precoding at the base station and linear signal processing at the relay, adaptive stream selection, and QAM modulation.
Abstract: In this paper, a novel relaying strategy that uses multiple-input multiple-output (MIMO) fixed relays with linear processing to support multiuser transmission in cellular networks is proposed. The fixed relay processes the received signal with linear operations and forwards the processed signal to multiple users creating a multiuser MIMO relay. This paper proposes upper and lower bounds on the achievable sum rate for this architecture assuming zero-forcing dirty paper coding at the base station, neglecting the direct links from the base station to the users, and with certain structure in the relay. These bounds are used to motivate an implementable multiuser precoding strategy that combines Tomlinson-Harashima precoding at the base station and linear signal processing at the relay, adaptive stream selection, and QAM modulation. Reduced complexity algorithms based on the sum rate lower bounds are used to select a subset of users. We compare the sum rates achieved by the proposed system architecture and algorithms with the sum rate upper bound and the sum rate achieved by the decode-and-forward relaying.

343 citations


Journal ArticleDOI
TL;DR: An upper bound on the mean-square-error performance of the probabilistically quantized distributed averaging (PQDA) is derived and it is shown that the convergence of the PQDA is monotonic by studying the evolution of the minimum-length interval containing the node values.
Abstract: In this paper, we develop algorithms for distributed computation of averages of the node data over networks with bandwidth/power constraints or large volumes of data. Distributed averaging algorithms fail to achieve consensus when deterministic uniform quantization is adopted. We propose a distributed algorithm in which the nodes utilize probabilistically quantized information, i.e., dithered quantization, to communicate with each other. The algorithm we develop is a dynamical system that generates sequences achieving a consensus at one of the quantization values almost surely. In addition, we show that the expected value of the consensus is equal to the average of the original sensor data. We derive an upper bound on the mean-square-error performance of the probabilistically quantized distributed averaging (PQDA). Moreover, we show that the convergence of the PQDA is monotonic by studying the evolution of the minimum-length interval containing the node values. We reveal that the length of this interval is a monotonically nonincreasing function with limit zero. We also demonstrate that all the node values, in the worst case, converge to the final two quantization bins at the same rate as standard unquantized consensus. Finally, we report the results of simulations conducted to evaluate the behavior and the effectiveness of the proposed algorithm in various scenarios.
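
A minimal sketch of the probabilistic (dithered) quantization idea behind PQDA: each node quantizes its value up or down at random so that the quantized value is unbiased, and the nodes then average the quantized values. The averaging matrix (a complete graph), quantization step, and iteration count are illustrative assumptions, not the paper's setup.

```python
# Consensus averaging with probabilistic quantization (toy example).
import numpy as np

rng = np.random.default_rng(1)

def prob_quantize(x, delta=0.1):
    """Quantize to the grid delta*Z, rounding up with probability equal to
    the fractional part so that E[Q(x)] = x (unbiased / dithered)."""
    low = np.floor(x / delta) * delta
    frac = (x - low) / delta
    return low + delta * (rng.random(x.shape) < frac)

n = 20
W = np.ones((n, n)) / n                  # complete-graph averaging matrix
x = rng.random(n) * 10                   # initial node values
true_mean = x.mean()

for _ in range(200):
    x = W @ prob_quantize(x)             # exchange quantized values, then average

print("original average:", round(true_mean, 3))
print("node values after 200 rounds:", np.round(x[:3], 3))
```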

Journal ArticleDOI
TL;DR: The test for uniformity introduced here is based on the number of observed “coincidences” (samples that fall into the same bin), the mean and variance of which may be computed explicitly for the uniform distribution and bounded nonparametrically for any distribution that is known to be ε-distant from uniform.
Abstract: How many independent samples N do we need from a distribution p to decide that p is ε-distant from uniform in an L1 sense, Σ_{i=1}^m |p(i) − 1/m| > ε? (Here m is the number of bins on which the distribution is supported, and is assumed known a priori.) Somewhat surprisingly, we only need Nε² ≫ m^{1/2} to make this decision reliably (this condition is both sufficient and necessary). The test for uniformity introduced here is based on the number of observed “coincidences” (samples that fall into the same bin), the mean and variance of which may be computed explicitly for the uniform distribution and bounded nonparametrically for any distribution that is known to be ε-distant from uniform. Some connections to the classical birthday problem are noted.
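
A minimal sketch of the coincidence statistic described above: count pairs of samples that land in the same bin and compare the count against its expectation under the uniform distribution. The bin count, sample size, and the skewed alternative below are illustrative assumptions; no formal decision threshold from the paper is implemented.

```python
# Coincidence counting for a uniformity check (toy example).
import numpy as np

def coincidences(samples, m):
    counts = np.bincount(samples, minlength=m)
    return int((counts * (counts - 1) // 2).sum())    # colliding pairs

rng = np.random.default_rng(0)
m, N = 10_000, 2_000
uniform_pairs = N * (N - 1) / (2 * m)                 # E[coincidences] if uniform

skewed = np.r_[np.full(m // 2, 1.5 / m), np.full(m // 2, 0.5 / m)]  # L1 distance 0.5
for name, p in [("uniform", np.full(m, 1 / m)), ("skewed", skewed)]:
    s = rng.choice(m, size=N, p=p)
    print(name, "coincidences:", coincidences(s, m),
          "expected if uniform:", round(uniform_pairs, 1))
```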

Journal ArticleDOI
TL;DR: This paper proposes to use summary measures of the set of possible causal effects to determine variable importance and uses the minimum absolute value of this set, since that is a lower bound on the size of the causal effect.
Abstract: We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts. For each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since that is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production.

Journal ArticleDOI
TL;DR: A near-optimal algorithm is designed to solve the important problem of multi-hop networking with CR nodes based on a novel sequential fixing procedure, where the integer variables are determined iteratively via a sequence of linear programs.
Abstract: Cognitive radio (CR) capitalizes on advances in signal processing and radio technology and is capable of reconfiguring RF and switching to desired frequency bands. It is a frequency-agile data communication device that is vastly more powerful than recently proposed multi-channel multi-radio (MC-MR) technology. In this paper, we investigate the important problem of multi-hop networking with CR nodes. For such a network, each node has a pool of frequency bands (typically of unequal size) that can be used for communication. The potential difference in bandwidth among the available frequency bands prompts the need to further divide these bands into sub-bands for optimal spectrum sharing. We characterize the behavior and constraints for such a multi-hop CR network from multiple layers, including modeling of spectrum sharing and sub-band division, scheduling and interference constraints, and flow routing. We develop a mathematical formulation with the objective of minimizing the required network-wide radio spectrum resource for a set of user sessions. Since the formulated model is a mixed-integer non-linear program (MINLP), which is NP-hard in general, we develop a lower bound for the objective by relaxing the integer variables and using a linearization technique. Subsequently, we design a near-optimal algorithm to solve this MINLP problem. This algorithm is based on a novel sequential fixing procedure, where the integer variables are determined iteratively via a sequence of linear programs. Simulation results show that solutions obtained by this algorithm are very close to the lower bounds obtained via the proposed relaxation, thus suggesting that the solution produced by the algorithm is near-optimal.
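
The sequential fixing idea (solve a relaxation, fix some integer variables, re-solve) can be illustrated on a tiny generic packing problem. The model below is a made-up toy, not the paper's cognitive-radio formulation, and the fixing rule is a simple illustrative choice.

```python
# Toy sequential fixing on a small 0/1 packing LP: repeatedly solve the LP
# relaxation, then fix one variable per round.  Near-integral values are
# fixed by rounding; genuinely fractional values are floored, which keeps
# the <= constraint feasible but may sacrifice optimality (near-optimal).
import numpy as np
from scipy.optimize import linprog

c = np.array([-5.0, -4.0, -3.0])        # maximize 5x0 + 4x1 + 3x2
A_ub = np.array([[2.0, 3.0, 1.0]])      # resource constraint 2x0 + 3x1 + x2 <= 5
b_ub = np.array([5.0])
bounds = [(0.0, 1.0)] * 3               # 0/1 variables, relaxed to [0, 1]
fixed = {}                              # index -> fixed integer value

while len(fixed) < len(c):
    cur_bounds = [(fixed[i], fixed[i]) if i in fixed else bounds[i]
                  for i in range(len(c))]
    x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=cur_bounds, method="highs").x
    free = [i for i in range(len(c)) if i not in fixed]
    i_star = min(free, key=lambda i: min(x[i], 1.0 - x[i]))   # closest to 0 or 1
    v = x[i_star]
    fixed[i_star] = int(round(v)) if abs(v - round(v)) < 1e-6 else int(np.floor(v))

print("integer solution:", [fixed[i] for i in range(len(c))])
```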

Journal ArticleDOI
TL;DR: A new Lyapunov-Krasovskii functional, which makes use of the information of both the lower and upper bounds of the time-varying network-induced delay, is proposed to derive a new delay-dependent H∞ stabilization criterion.
Abstract: This note is concerned with robust H∞ control of linear networked control systems with time-varying network-induced delay and data packet dropout. A new Lyapunov-Krasovskii functional, which makes use of the information of both the lower and upper bounds of the time-varying network-induced delay, is proposed to derive a new delay-dependent H∞ stabilization criterion. The criterion is formulated in the form of a non-convex matrix inequality, of which a feasible solution can be obtained by solving a minimization problem in terms of linear matrix inequalities. In order to obtain much less conservative results, a tighter bounding for some terms is estimated. Moreover, no slack variable is introduced. Finally, two numerical examples are given to show the effectiveness of the proposed design method.

Journal ArticleDOI
TL;DR: In this article, it was shown that the effective gravitational cutoff is reduced to Λ_G ≈ M_Planck/√N and a new description is needed around this scale.
Abstract: In theories with a large number N of particle species, black hole physics imposes an upper bound on the mass of the species equal to M_Planck/√N. This bound suggests a novel solution to the hierarchy problem in which there are N ≈ 10^32 gravitationally coupled species, for example 10^32 copies of the standard model. The black hole bound forces them to be at the weak scale, hence providing a stable hierarchy. We present various arguments that in such theories the effective gravitational cutoff is reduced to Λ_G ≈ M_Planck/√N and a new description is needed around this scale. In particular, black holes smaller than Λ_G^(-1) are already no longer semiclassical. The nature of the completion is model dependent. One natural possibility is that Λ_G is the quantum gravity scale. We provide evidence that within this type of scenario, contrary to the standard intuition, micro black holes have a (slowly fading) memory of the species of origin. Consequently, the black holes produced at the LHC will predominantly decay into the standard model particles, and negligibly into the other species.
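
Worked numbers for the species bound quoted above: with N ~ 10^32 gravitationally coupled copies of the standard model, M_Planck/√N comes out near the weak (TeV) scale. The Planck mass value used is the standard approximate figure.

```python
# Evaluate the species mass bound M_Planck / sqrt(N) for N = 10^32.
M_planck_GeV = 1.2e19          # Planck mass in GeV (approximate)
N = 1e32
bound_GeV = M_planck_GeV / N ** 0.5
print(f"species mass bound ~ {bound_GeV:.1e} GeV (roughly the TeV / weak scale)")
```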

Journal ArticleDOI
TL;DR: In this article, the authors give an explicit expression for the rate of convergence for fully indecomposable matrices and compare the measure with some well known alternatives, including PageRank.
Abstract: As long as a square nonnegative matrix $A$ contains sufficient nonzero elements, then the Sinkhorn-Knopp algorithm can be used to balance the matrix, that is, to find a diagonal scaling of $A$ that is doubly stochastic. It is known that the convergence is linear, and an upper bound has been given for the rate of convergence for positive matrices. In this paper we give an explicit expression for the rate of convergence for fully indecomposable matrices. We describe how balancing algorithms can be used to give a measure of web page significance. We compare the measure with some well known alternatives, including PageRank. We show that, with an appropriate modification, the Sinkhorn-Knopp algorithm is a natural candidate for computing the measure on enormous data sets.
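
For reference, a minimal sketch of the Sinkhorn-Knopp iteration itself: alternate row and column rescalings of a nonnegative matrix until it is nearly doubly stochastic. The tolerance and the random test matrix are illustrative assumptions; the paper's convergence-rate analysis and web-significance measure are not reproduced.

```python
# Sinkhorn-Knopp balancing of a positive matrix (toy example).
import numpy as np

def sinkhorn_knopp(A, tol=1e-10, max_iter=10_000):
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    for _ in range(max_iter):
        c = 1.0 / (A.T @ r)              # column scaling factors
        r_new = 1.0 / (A @ c)            # row scaling factors
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    return np.diag(r) @ A @ np.diag(c)   # (approximately) doubly stochastic

rng = np.random.default_rng(0)
P = sinkhorn_knopp(rng.random((5, 5)) + 0.1)
print("row sums:", np.round(P.sum(axis=1), 6))
print("col sums:", np.round(P.sum(axis=0), 6))
```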

Journal ArticleDOI
TL;DR: This paper considers three kinds of deployments for a sensor network on a unit square (a √n × √n grid, random uniform over all n points, and Poisson) and claims that the critical value of the function npπr²/log(np) is 1 for the event of k-coverage of every point.
Abstract: Sensor networks are often desired to last many times longer than the active lifetime of individual sensors. This is usually achieved by putting sensors to sleep for most of their lifetime. On the other hand, event monitoring applications require guaranteed k-coverage of the protected region at all times. As a result, determining the appropriate number of sensors to deploy that achieves both goals simultaneously becomes a challenging problem. In this paper, we consider three kinds of deployments for a sensor network on a unit square: a √n × √n grid, random uniform (for all n points), and Poisson (with density n). In all three deployments, each sensor is active with probability p, independently from the others. Then, we claim that the critical value of the function npπr²/log(np) is 1 for the event of k-coverage of every point. We also provide an upper bound on the window of this phase transition. Although the conditions for the three deployments are similar, we obtain sharper bounds for the random deployments than the grid deployment, which occurs due to the boundary condition. In this paper, we also provide corrections to previously published results. Finally, we use simulation to show the usefulness of our analysis in real deployment scenarios.
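
A worked example of the critical-value statement above: setting npπr²/log(np) = 1 and solving for the sensing radius r that marks the coverage phase transition. The deployment size and activation probability are arbitrary illustrative choices.

```python
# Critical sensing radius from n*p*pi*r^2 / log(n*p) = 1 on the unit square.
import math

n, p = 10_000, 0.2                      # deployed sensors, probability each is active
r_crit = math.sqrt(math.log(n * p) / (n * p * math.pi))
print(f"critical sensing radius: r ~ {r_crit:.4f}")
```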

Journal ArticleDOI
TL;DR: In this paper, it was shown that the size of a Kakeya set is at least the dimension of the space of polynomials of degree q − 2, which is ≈ q^(n−1) when q is large.

Abstract: The motivation for studying Kakeya sets over finite fields is to try to better understand the more complicated questions regarding Kakeya sets in R^n. A Kakeya set K ⊂ R^n is a compact set containing a line segment of unit length in every direction. The famous Kakeya Conjecture states that such sets must have Hausdorff (or Minkowski) dimension equal to n. The importance of this conjecture is partially due to the connections it has to many problems in harmonic analysis, number theory and PDE. This conjecture was proved for n = 2 [Dav71] and is open for larger values of n (we refer the reader to the survey papers [Wol99, Bou00, Tao01] for more information). It was first suggested by Wolff [Wol99] to study finite field Kakeya sets. It was asked in [Wol99] whether there exists a lower bound of the form C_n · q^n on the size of such sets in F^n. The lower bound appearing in [Wol99] was of the form C_n · q^((n+2)/2). This bound was further improved in [Rog01, BKT04, MT04, Tao08] both for general n and for specific small values of n (e.g. for n = 3, 4). For general n, the most current best lower bound is the one obtained in [Rog01, MT04] (based on results from [KT99]) of C_n · q^(4n/7). The main technique used to show this bound is an additive number theoretic lemma relating the sizes of different sum sets of the form A + rB, where A and B are fixed sets in F^n and r ranges over several different values in F (the idea to use additive number theory in the context of Kakeya sets is due to Bourgain [Bou99]). The next theorem, proven in Section 2, gives a near-optimal bound on the size of Kakeya sets. Roughly speaking, the proof follows by observing that any degree q − 2 homogeneous polynomial in F[x_1, . . . , x_n] can be 'reconstructed' from its values on any Kakeya set K ⊂ F^n. This implies that the size of K is at least the dimension of the space of polynomials of degree q − 2, which is ≈ q^(n−1) (when q is large).

Journal ArticleDOI
TL;DR: In this article, the Lomb-Scargle periodogram is used to search for periodicities in observational data and the problem of assessing the statistical significance of candidate periodicities for a number of periodograms is considered.
Abstract: The least-squares (or Lomb-Scargle) periodogram is a powerful tool that is routinely used in many branches of astronomy to search for periodicities in observational data. The problem of assessing the statistical significance of candidate periodicities for a number of periodograms is considered. Based on results in extreme value theory, improved analytic estimations of false alarm probabilities are given. These include an upper limit to the false alarm probability (or a lower limit to the significance). The estimations are tested numerically in order to establish regions of their practical applicability.
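
A minimal sketch of computing a Lomb-Scargle periodogram for unevenly sampled data with SciPy; the injected signal, noise level, and frequency grid are illustrative assumptions, and the paper's improved false-alarm estimates are not reproduced here.

```python
# Lomb-Scargle periodogram of irregularly sampled data (toy example).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))            # irregular sampling times
y = np.sin(2 * np.pi * 0.17 * t) + 0.5 * rng.standard_normal(t.size)
y -= y.mean()                                    # remove the mean before analysis

freqs = np.linspace(0.01, 0.5, 2000)             # trial frequencies (cycles per time unit)
power = lombscargle(t, y, 2 * np.pi * freqs)     # lombscargle expects angular frequencies

print("peak frequency ~", round(freqs[np.argmax(power)], 3), "(injected: 0.17)")
```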

Journal ArticleDOI
TL;DR: In this article, it was shown that away from the spectral edges, the density of eigenvalues concentrates around the Wigner semicircle law on energy scales of order 1/N.
Abstract: We consider $N\times N$ Hermitian random matrices with independent identical distributed entries. The matrix is normalized so that the average spacing between consecutive eigenvalues is of order 1/N. Under suitable assumptions on the distribution of the single matrix element, we prove that, away from the spectral edges, the density of eigenvalues concentrates around the Wigner semicircle law on energy scales $\eta \gg N^{-1} (\log N)^8$. Up to the logarithmic factor, this is the smallest energy scale for which the semicircle law may be valid. We also prove that for all eigenvalues away from the spectral edges, the $\ell^\infty$-norm of the corresponding eigenvectors is of order $O(N^{-1/2})$, modulo logarithmic corrections. The upper bound $O(N^{-1/2})$ implies that every eigenvector is completely delocalized, i.e., the maximum size of the components of the eigenvector is of the same order as their average size. In the Appendix, we include a lemma by J. Bourgain which removes one of our assumptions on the distribution of the matrix elements.
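
A small numerical illustration of the semicircle law discussed above: the eigenvalue histogram of a large Hermitian random matrix with entries of variance 1/N is compared against the density √(4 − x²)/(2π). The matrix size and bin count are illustrative assumptions.

```python
# Empirical check of the Wigner semicircle law for a Hermitian random matrix.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / (2 * np.sqrt(N))          # Hermitian, E|H_ij|^2 ~ 1/N

eigs = np.linalg.eigvalsh(H)
hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print("max deviation from semicircle density:", round(np.abs(hist - semicircle).max(), 3))
```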

Journal ArticleDOI
TL;DR: In this article, the authors present a systematic way to solve the right-handed quark mixing matrix analytically in this model and find that the leading order solution has the same hierarchical structure as the left-handed CKM matrix with one more CP-violating phase coming from the complex Higgs vev.

Book ChapterDOI
19 Mar 2008
TL;DR: This paper suggests coalition-resilient secret sharing and SMPC protocols with the property that after any sequence of iterations it is still a computational best response to follow them, and are immune to backward induction.
Abstract: The goal of this paper is finding fair protocols for the secret sharing and secure multiparty computation (SMPC) problems, when players are assumed to be rational. It was observed by Halpern and Teague (STOC 2004) that protocols with a bounded number of iterations are susceptible to backward induction and cannot be considered rational. Previously suggested cryptographic solutions all share the property of having an essential exponential upper bound on their running time, and hence they are also susceptible to backward induction. Although it seems that this bound is an inherent property of every cryptography-based solution, we show that this is not the case. We suggest coalition-resilient secret sharing and SMPC protocols with the property that after any sequence of iterations it is still a computational best response to follow them. Therefore, the protocols can be run any number of iterations, and are immune to backward induction. The means of communication assumed is a broadcast channel, and we consider both the simultaneous and non-simultaneous cases.

Journal ArticleDOI
TL;DR: This paper models the CTC problem as a maximum cover tree (MCT) problem, determines an upper bound on the network lifetime for the MCT problem, and develops a (1+w)H(M̂) approximation algorithm to solve it; simulation results show that the lifetime obtained is close to the upper bound.
Abstract: In this paper, we consider the connected target coverage (CTC) problem with the objective of maximizing the network lifetime by scheduling sensors into multiple sets, each of which can maintain both target coverage and connectivity among all the active sensors and the sink. We model the CTC problem as a maximum cover tree (MCT) problem and prove that the MCT problem is NP-complete. We determine an upper bound on the network lifetime for the MCT problem and then develop a (1+w)H(M̂) approximation algorithm to solve it, where w is an arbitrarily small number, H(M̂) = Σ_{1≤i≤M̂} (1/i), and M̂ is the maximum number of targets in the sensing area of any sensor. As the protocol cost of the approximation algorithm may be high in practice, we develop a faster heuristic algorithm based on the approximation algorithm called Communication Weighted Greedy Cover (CWGC) and present a distributed implementation of the heuristic algorithm. We study the performance of the approximation algorithm and the CWGC algorithm by comparing them with the lifetime upper bound and other basic algorithms that consider the coverage and connectivity problems independently. Simulation results show that the approximation algorithm and the CWGC algorithm perform much better than others in terms of the network lifetime, and the performance improvement can be up to 45% over the best-known basic algorithm. The lifetime obtained by our algorithms is close to the upper bound. Compared with the approximation algorithm, the CWGC algorithm can achieve a similar performance in terms of the network lifetime with a lower protocol cost.

Proceedings Article
14 Sep 2008
TL;DR: This paper establishes an upper bound on the complexity of multi-agent planning problems that depends exponentially on two parameters quantifying the level of agents' coupling, and on these parameters only.
Abstract: Loosely coupled multi-agent systems are perceived as easier to plan for because they require less coordination between agent sub-plans. In this paper we set out to formalize this intuition. We establish an upper bound on the complexity of multi-agent planning problems that depends exponentially on two parameters quantifying the level of agents' coupling, and on these parameters only. The first parameter is problem-independent, and it measures the inherent level of coupling within the system. The second is problem-specific and it has to do with the minmax number of action-commitments per agent required to solve the problem. Most importantly, the direct dependence on the number of agents, on the overall size of the problem, and on the length of the agents' plans, is only polynomial. This result is obtained using a new algorithmic methodology which we call "planning as CSP+planning". We believe this to be one of the first formal results to both quantify the notion of agents' coupling, and to demonstrate a multi-agent planning algorithm that, for fixed coupling levels, scales polynomially with the size of the problem.

Journal ArticleDOI
TL;DR: It is shown that random codes are asymptotically optimal in the sense that they achieve the minimum possible distortion in probability as n and the code rate approach infinity linearly.
Abstract: The Grassmann manifold G_{n,p}(L) is the set of all p-dimensional planes (through the origin) in the n-dimensional Euclidean space L^n, where L is either R or C. This paper considers the quantization problem in which a source in G_{n,p}(L) is quantized through a code in G_{n,q}(L), with p and q not necessarily the same. The analysis is based on the volume of a metric ball in G_{n,p}(L) with center in G_{n,q}(L), and our chief result is a closed-form expression for the volume of a metric ball of radius at most one. This volume formula holds for arbitrary n, p, q, and L, while previous results pertained only to some special cases. Based on this volume formula, several bounds are derived for the rate-distortion tradeoff assuming that the quantization rate is sufficiently high. The lower and upper bounds on the distortion rate function are asymptotically identical, and therefore precisely quantify the asymptotic rate-distortion tradeoff. We also show that random codes are asymptotically optimal in the sense that they achieve the minimum possible distortion in probability as n and the code rate approach infinity linearly. Finally, as an application of the derived results to communication theory, we quantify the effect of beamforming matrix selection in multiple-antenna communication systems with finite-rate channel state feedback.

Journal ArticleDOI
TL;DR: An analysis of properties of counterdiabatic fields (CDFs) that restore the adiabatic dynamics of a system by suppressing diabatic effects as they are generated are reported.
Abstract: The control of population transfer can be effected by the adiabatic evolution of a system under the influence of an applied field. If the field is too rapidly varying or too weak, the conditions for adiabatic transfer are not satisfactorily met. We report the results of an analysis of properties of counterdiabatic fields (CDFs) that restore the adiabatic dynamics of a system by suppressing diabatic effects as they are generated. We observe that a CDF is not unique and find the one that has minimum intensity, and we provide natural upper and lower bounds to the integrated intensity of a CDF in terms of integrals of the eigenvalues of the system Hamiltonian. For Hamiltonians that are separable with respect to their parameters, we prove that the time integral of an associated CDF is path independent. Finally we explain why and when, in the neighborhood of an avoided crossing, a CDF can be approximated by Lorentzian pulses.

Journal ArticleDOI
TL;DR: LMI-based synthesis tools for regional stability and performance of linear anti-windup compensators for linear control systems are presented, and it is shown that for systems whose plants have poles in the closed left-half plane, plant-order dynamic anti-windup can achieve semiglobal exponential stability and finite L2 gain for exogenous inputs with L2 norm bounded by any finite value.

Journal ArticleDOI
TL;DR: The results imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting.
Abstract: We construct binary codes for fingerprinting digital documents. Our codes for n users that are ε-secure against c pirates have length O(c² log(n/ε)). This improves the codes proposed by Boneh and Shaw [1998], whose length is approximately the square of this length. The improvement carries over to works using the Boneh–Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa [2005]. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. This lower bound generalizes the bound found independently by Peikert et al. [2003] that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet, and the equal strength of two distinct models for fingerprinting.

Journal ArticleDOI
TL;DR: In this paper, an alpha finite element method (αFEM) was proposed for computing nearly exact solutions in the energy norm for mechanics problems using meshes that can be generated automatically for arbitrarily complicated domains.