
Showing papers on "Upper and lower bounds" published in 2005


Proceedings Article
01 Jan 2005
TL;DR: This paper develops a modification of the technique recently proposed by Wainwright et al. (Nov. 2005), called sequential tree-reweighted message passing, which outperforms both ordinary belief propagation and the tree-reweighted algorithm on both synthetic and real problems.
Abstract: Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper, we focus on the recent technique proposed by Wainwright et al. (Nov. 2005): tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound; it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both the ordinary belief propagation and the tree-reweighted algorithm in (M. J. Wainwright, et al., Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts.
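
For orientation, below is a minimal sketch of plain min-sum message passing on a chain, where dynamic programming is exact; TRW and the sequential variant described above reweight and reorder exactly this kind of message update. This is an illustrative baseline, not the paper's algorithm, and all names are assumptions.

```python
import numpy as np

def min_sum_chain(unary, pairwise):
    """Exact energy minimization on a chain MRF by min-sum message passing.

    unary:    list of T arrays, unary[t][x] = cost of label x at node t
    pairwise: list of T-1 arrays, pairwise[t][x, y] = cost of (x at t, y at t+1)
    Returns (minimal energy, optimal labeling).
    """
    T = len(unary)
    msgs = [np.zeros_like(u) for u in unary]  # msgs[t] = message into node t
    for t in range(T - 1):                    # forward pass
        msgs[t + 1] = np.min(unary[t] + msgs[t] + pairwise[t].T, axis=1)
    last = unary[-1] + msgs[-1]
    labels = [int(np.argmin(last))]
    for t in range(T - 2, -1, -1):            # backtrack for the argmin
        labels.append(int(np.argmin(unary[t] + msgs[t] + pairwise[t][:, labels[-1]])))
    return float(last.min()), labels[::-1]

# Tiny 3-node, 2-label example with Potts-style pairwise costs.
u = [np.array([0.0, 2.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = [np.array([[0.0, 1.0], [1.0, 0.0]])] * 2
print(min_sum_chain(u, p))  # -> (1.0, [0, 0, 0])
```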

1,172 citations


Journal ArticleDOI
TL;DR: It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel.
Abstract: We study the capacity of multiple-input multiple-output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; and in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.
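
The ergodic bounds above bracket quantities built from log-det mutual information terms. As a self-contained illustration of the quantity being bounded (point-to-point Rayleigh ergodic capacity, not the paper's relay bounds; all parameter names are assumptions), a Monte Carlo estimate might look like:

```python
import numpy as np

def ergodic_capacity_mc(nt: int, nr: int, snr: float, trials: int = 2000,
                        seed: int = 0) -> float:
    """Monte Carlo estimate of E[log2 det(I + (snr/nt) H H^H)] over Rayleigh
    fading with equal power allocation (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * h @ h.conj().T)
        total += logdet / np.log(2)
    return total / trials

print(f"{ergodic_capacity_mc(2, 2, snr=10.0):.2f} bit/s/Hz")  # 2x2 Rayleigh, SNR = 10 (linear)
```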

878 citations


Journal ArticleDOI
TL;DR: This work develops and analyzes methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
Abstract: We develop and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
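
In symbols, the tree-combination bound described in the abstract: writing the exponential parameter as a convex combination θ = Σ_T ρ_T θ(T) over spanning trees T and interchanging max and sum gives (a sketch of the core inequality, in the paper's spirit):

```latex
\max_{x}\ \theta^{\top}\phi(x)
\;=\; \max_{x}\ \sum_{T}\rho_{T}\,\theta(T)^{\top}\phi(x)
\;\le\; \sum_{T}\rho_{T}\,\max_{x}\ \theta(T)^{\top}\phi(x),
\qquad \rho_{T}\ge 0,\quad \sum_{T}\rho_{T}=1,
```

with equality exactly when a single configuration attains the maximum in every tree simultaneously, which is the shared-configuration condition stated in the abstract.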

770 citations


Journal ArticleDOI
TL;DR: Upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived and it can be shown that FH-CDMA obtains a higher transmission capacity on the order of M^{1-2/α}, where M is the spreading factor and α > 2 is the path loss exponent.
Abstract: In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^{1-2/α}, where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.
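
To get a feel for the size of the M^{1-2/α} gain of FH-CDMA over DS-CDMA quoted above, a few lines of arithmetic suffice (illustrative parameter values only):

```python
# Relative transmission-capacity gain of FH-CDMA over DS-CDMA, M^(1 - 2/alpha).
for M in (16, 64, 256):        # spreading factor
    for alpha in (3.0, 4.0):   # path loss exponent, must exceed 2
        print(f"M={M:3d}  alpha={alpha}  gain={M ** (1 - 2 / alpha):6.1f}")
```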

627 citations


Journal ArticleDOI
08 Dec 2005
TL;DR: A novel method for exactly solving general constraint satisfaction optimization problems with at most two variables per constraint, giving the first exponential improvement over the trivial algorithm and yielding connections between the complexity of some (polynomial-time) high-dimensional search problems and some NP-hard problems.
Abstract: We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm. More precisely, the runtime bound is a constant factor improvement in the base of the exponent: the algorithm can count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m^3 · 2^{ωn/3}) time, where ω < 2.376 is the matrix product exponent over a ring. When the constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our construction shows that improvement in the runtime exponent of either k-clique solution (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. Our approach also yields connections between the complexity of some (polynomial time) high-dimensional search problems and some NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ_1, then there is a (2 - ε)^n algorithm for MAX-LIN. These results may be construed as either lower bounds on the high-dimensional problems, or hope that better algorithms exist for the corresponding hard problems.
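
The matrix-multiplication connection runs through k-clique with k = 3: the reduction builds a large graph whose triangles encode assignments, and triangles are found through a matrix power. A minimal sketch of that counting primitive (the standard trace(A^3)/6 identity, not the paper's full reduction):

```python
import numpy as np

def count_triangles(adj: np.ndarray) -> int:
    """Count triangles in an undirected graph via matrix multiplication.

    trace(A^3) counts closed walks of length 3; each triangle is counted
    6 times (3 starting vertices x 2 directions).
    """
    a = adj.astype(np.int64)
    return int(np.trace(a @ a @ a) // 6)

# Tiny usage example: a 4-cycle plus the chord {1, 3} has exactly 2 triangles.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])
print(count_triangles(A))  # -> 2
```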

508 citations


Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
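
The starting point for these bounds can be stated in one line: the log partition function A(·) is convex in the exponential parameter, so for any convex combination θ = Σ_T ρ_T θ(T) of tree-structured parameters, Jensen's inequality gives (notation as in the abstract; a statement sketch):

```latex
A(\theta) \;=\; A\Big(\textstyle\sum_{T}\rho_{T}\,\theta(T)\Big)
\;\le\; \sum_{T}\rho_{T}\,A\big(\theta(T)\big),
\qquad \rho_{T}\ge 0,\quad \sum_{T}\rho_{T}=1,
```

where each A(θ(T)) is exactly computable by sum-product on the tree T; optimizing the decomposition over trees tightens the bound.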

498 citations


Journal ArticleDOI
TL;DR: An upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-square-distributed variables is derived and evaluated analytically and compared to the results obtained by Monte Carlo simulations.
Abstract: We consider the capacity of multiple-input multiple-output systems with reduced complexity. One link-end uses all available antennas, while the other chooses the L out of N antennas that maximize capacity. We derive an upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-square-distributed variables. This bound is then evaluated analytically and compared to the results obtained by Monte Carlo simulations. Our results show that the achieved capacity is close to the capacity of a full-complexity system provided that L is at least as large as the number of antennas at the other link-end. For example, for L = 3, N = 8 antennas at the receiver and three antennas at the transmitter, the capacity of the reduced-complexity scheme is 20 bits/s/Hz compared to 23 bits/s/Hz of a full-complexity scheme. We also present a suboptimum antenna subset selection algorithm that has a complexity of N^2 compared to the optimum algorithm with a complexity of (N choose L).
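
As a concrete illustration of the selection problem in the abstract, here is a minimal sketch comparing exhaustive subset selection (cost (N choose L)) against a simple greedy heuristic in the N^2 spirit; the greedy rule and all parameter values are assumptions for illustration, not the paper's algorithm:

```python
import itertools
import numpy as np

def capacity(h: np.ndarray, snr: float) -> float:
    """log2 det(I + (snr/nt) H H^H): MIMO capacity with equal power allocation."""
    nr, nt = h.shape
    _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * h @ h.conj().T)
    return logdet / np.log(2)

def best_subset(h, L, snr):
    """Optimum selection: exhaustive search over all (N choose L) row subsets."""
    return max(itertools.combinations(range(h.shape[0]), L),
               key=lambda s: capacity(h[list(s)], snr))

def greedy_subset(h, L, snr):
    """Suboptimum selection in the O(N^2) spirit: add one antenna at a time."""
    chosen = []
    for _ in range(L):
        rest = (i for i in range(h.shape[0]) if i not in chosen)
        chosen.append(max(rest, key=lambda i: capacity(h[chosen + [i]], snr)))
    return tuple(sorted(chosen))

rng = np.random.default_rng(0)
N, nt, L, snr = 8, 3, 3, 100.0   # choose L of N receive antennas; SNR = 20 dB
H = (rng.standard_normal((N, nt)) + 1j * rng.standard_normal((N, nt))) / np.sqrt(2)
for name, s in (("exhaustive", best_subset(H, L, snr)), ("greedy", greedy_subset(H, L, snr))):
    print(f"{name:10s} antennas {s}: {capacity(H[list(s)], snr):.2f} bit/s/Hz")
```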

494 citations


Journal ArticleDOI
TL;DR: This work constructs analytically optimal codebooks meeting the Welch lower bound, and develops an efficient numerical search method based on a generalized Lloyd algorithm that leads to considerable improvement on the achieved I_max over existing alternatives.
Abstract: Consider a codebook containing N unit-norm complex vectors in a K-dimensional space. In a number of applications, the codebook that minimizes the maximal cross-correlation amplitude (I_max) is often desirable. Relying on tools from combinatorial number theory, we construct analytically optimal codebooks meeting, in certain cases, the Welch lower bound. When analytical constructions are not available, we develop an efficient numerical search method based on a generalized Lloyd algorithm, which leads to considerable improvement on the achieved I_max over existing alternatives. We also derive a composite lower bound on the minimum achievable I_max that is effective for any codebook size N.
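
For reference, the Welch lower bound mentioned above has the closed form I_max ≥ sqrt((N - K)/((N - 1)K)) for N ≥ K unit-norm vectors in dimension K; a minimal numeric sketch comparing it against the I_max of a random codebook (not the paper's Lloyd-type search):

```python
import numpy as np

def welch_lower_bound(n_vectors: int, dim: int) -> float:
    """Welch lower bound on the maximal cross-correlation amplitude I_max
    for N unit-norm vectors in a K-dimensional complex space (N >= K)."""
    n, k = n_vectors, dim
    return float(np.sqrt((n - k) / ((n - 1) * k)))

def max_cross_correlation(codebook: np.ndarray) -> float:
    """I_max = max_{i != j} |<c_i, c_j>| for unit-norm columns c_i."""
    g = np.abs(codebook.conj().T @ codebook)
    np.fill_diagonal(g, 0.0)
    return float(g.max())

rng = np.random.default_rng(1)
K, N = 4, 16
C = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
C /= np.linalg.norm(C, axis=0)  # normalize columns

print(f"Welch bound : {welch_lower_bound(N, K):.4f}")
print(f"random I_max: {max_cross_correlation(C):.4f}")  # typically well above the bound
```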

445 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to provide a systematic analysis of the convergence rate achieved by the multiplicative and additive half-quadratic regularizations, and to determine upper bounds on their root-convergence factors.
Abstract: We address the minimization of regularized convex cost functions which are customarily used for edge-preserving restoration and reconstruction of signals and images. In order to accelerate computation, the multiplicative and the additive half-quadratic reformulations of the original cost function have been pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367--383] and Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932--946]. The alternate minimization of the resultant (augmented) cost functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For the multiplicative and additive half-quadratic regularizations, we determine upper bounds on their root-convergence factors. The bound for the multiplicative form is seen to be always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence for the multiplicative form is always less than that for the additive form. However, the computational cost of each iteration is much higher for the multiplicative form than for the additive form. The global assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form. When the additive form is applicable, it is hence recommended. Extensive experiments demonstrate that in our MATLAB implementation, both methods are substantially faster (in terms of computational times) than the standard MATLAB Optimization Toolbox routines used in our comparison study.

417 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that if the Dirichlet norm is replaced by the standard Sobolev norm, then the supremum of ∫_Ω e^{4πu²} dx over all such functions is uniformly bounded, independently of the domain Ω.
Abstract: The classical Trudinger–Moser inequality says that for functions with Dirichlet norm smaller or equal to 1 in the Sobolev space H_0^1(Ω) (with Ω ⊂ R² a bounded domain), the integral ∫_Ω e^{4πu²} dx is uniformly bounded by a constant depending only on Ω. If the volume |Ω| becomes unbounded then this bound tends to infinity, and hence the Trudinger–Moser inequality is not available for such domains (and in particular for R²). In this paper, we show that if the Dirichlet norm is replaced by the standard Sobolev norm, then the supremum of ∫_Ω e^{4πu²} dx over all such functions is uniformly bounded, independently of the domain Ω. Furthermore, a sharp upper bound for the limits of Sobolev normalized concentrating sequences is proved for Ω = B_R, the ball of radius R, and for Ω = R². Finally, the explicit construction of optimal concentrating sequences allows us to prove that the above supremum is attained on balls B_R ⊂ R² and on R².
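
In symbols, the main inequality restated in the form standard for unbounded domains, where the integrand is taken as e^{4πu²} - 1 so that it is integrable (a statement sketch, matching the abstract up to this normalization):

```latex
\sup_{\substack{u \in H^{1}(\mathbb{R}^{2}) \\ \|u\|_{H^{1}(\mathbb{R}^{2})} \le 1}}
\ \int_{\mathbb{R}^{2}} \left( e^{4\pi u^{2}} - 1 \right) dx \;<\; \infty .
```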

412 citations


Journal ArticleDOI
TL;DR: This paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance in a network joint source-channel coding problem.
Abstract: The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance-it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.

Journal ArticleDOI
TL;DR: A tight lower bound for the minimum node density that is necessary to obtain an almost surely connected subnetwork on a bounded area of given size is given.
Abstract: This article analyzes the connectivity of multihop radio networks in a log-normal shadow fading environment. Assuming the nodes have equal transmission capabilities and are randomly distributed according to a homogeneous Poisson process, we give a tight lower bound for the minimum node density that is necessary to obtain an almost surely connected subnetwork on a bounded area of given size. We derive an explicit expression for this bound, compute it in a variety of scenarios, and verify its tightness by simulation. The numerical results can be used for the practical design and simulation of wireless sensor and ad hoc networks. In addition, they give insight into how fading affects the topology of multihop networks. It is explained why a high fading variance helps the network to become connected.
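
A simulation of the kind used to verify the bound can be sketched in a few lines; the link model below (distance-dependent path loss plus a Gaussian shadowing term in dB against a fixed budget) and every parameter value are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_connected(pts: np.ndarray, sigma_db: float, alpha: float, budget_db: float) -> bool:
    """Connectivity of one network realization under log-normal shadowing.

    Assumed link model: link (i, j) exists when
    10*alpha*log10(d_ij) + X <= budget_db, with X ~ N(0, sigma_db^2).
    """
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    shadow = rng.normal(0.0, sigma_db, size=(n, n))
    shadow = np.triu(shadow, 1) + np.triu(shadow, 1).T        # symmetric links
    adj = 10 * alpha * np.log10(np.maximum(d, 1e-9)) + shadow <= budget_db
    np.fill_diagonal(adj, False)
    seen, stack = {0}, [0]                                    # BFS from node 0
    while stack:
        for j in np.flatnonzero(adj[stack.pop()]):
            if int(j) not in seen:
                seen.add(int(j)); stack.append(int(j))
    return len(seen) == n

def p_connected(density: float, area: float = 100.0, trials: int = 200) -> float:
    """Monte Carlo estimate of P(connected) for Poisson-distributed nodes."""
    wins = 0
    for _ in range(trials):
        n = max(rng.poisson(density * area * area), 2)
        wins += is_connected(rng.uniform(0, area, (n, 2)),
                             sigma_db=8.0, alpha=3.0, budget_db=45.0)
    return wins / trials

for rho in (0.002, 0.004, 0.008):   # nodes per square metre
    print(f"density {rho}: P(connected) ~ {p_connected(rho):.2f}")
```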

Journal ArticleDOI
TL;DR: The requirement of stability despite the destabilizing effect of pressure yields a lower bound on the number of extra contacts per particle δz: δz ≥ p^{1/2}, which generalizes the Maxwell criterion for rigidity when pressure is present.
Abstract: Glasses have a large excess of low-frequency vibrational modes in comparison with most crystalline solids. We show that such a feature is a necessary consequence of the weak connectivity of the solid, and that the frequency of modes in excess is very sensitive to the pressure. We analyze, in particular, two systems whose density D(ω) of vibrational modes of angular frequency ω display scaling behaviors with the packing fraction: (i) simulations of jammed packings of particles interacting through finite-range, purely repulsive potentials, comprised of weakly compressed spheres at zero temperature and (ii) a system with the same network of contacts, but where the force between any particles in contact (and therefore the total pressure) is set to zero. We account in the two cases for the observed (a) convergence of D(ω) toward a nonzero constant as ω → 0, (b) appearance of a low-frequency cutoff ω*, and (c) power-law increase of ω* with compression. Differences between these two systems occur at a lower frequency. The density of states of the modified system displays an abrupt plateau that appears at ω*, below which we expect the system to behave as a normal, continuous, elastic body. In the unmodified system, the pressure lowers the frequency of the modes in excess. The requirement of stability despite the destabilizing effect of pressure yields a lower bound on the number of extra contacts per particle δz: δz ≥ p^{1/2}, which generalizes the Maxwell criterion for rigidity when pressure is present. This scaling behavior is observed in the simulations. We finally discuss how the cooling procedure can affect the microscopic structure and the density of normal modes.

Journal ArticleDOI
TL;DR: In this paper, a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the fluctuation theorem concerning the probability of second law violating phase-space paths is clarified.
Abstract: Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results.

Journal ArticleDOI
TL;DR: The Leray-α model, inspired by the Lagrangian averaged Navier–Stokes-α model of turbulence, is introduced and shown to have great potential as a subgrid-scale large-eddy simulation model of turbulence, with very good agreement against empirical data for turbulent boundary layers.
Abstract: In this paper we introduce and study a new model for three-dimensional turbulence, the Leray-α model. This model is inspired by the Lagrangian averaged Navier–Stokes-α model of turbulence (also known as the Navier–Stokes-α model or the viscous Camassa–Holm equations). As in the case of the Lagrangian averaged Navier–Stokes-α model, the Leray-α model compares successfully with empirical data from turbulent channel and pipe flows, for a wide range of Reynolds numbers. We establish here an upper bound for the dimension of the global attractor (the number of degrees of freedom) of the Leray-α model of the order of (L/l_d)^{12/7}, where L is the size of the domain and l_d is the dissipation length scale. This upper bound is much smaller than what one would expect for three-dimensional models, i.e. (L/l_d)^3. This remarkable result suggests that the Leray-α model has great potential to become a good subgrid-scale large-eddy simulation model of turbulence. We support this observation by studying, analytically and computationally, the energy spectrum and show that in addition to the usual k^{-5/3} Kolmogorov power law the inertial range has a steeper power-law spectrum for wavenumbers larger than 1/α. Finally, we propose a Prandtl-like boundary-layer model, induced by the Leray-α model, and show a very good agreement of this model with empirical data for turbulent boundary layers.

Posted Content
TL;DR: In this paper, a new version of the quantum threshold theorem that applies to concatenation of a quantum code that corrects only one error is presented, and a rigorous lower bound ε_0 > 2.73 × 10^{-5} on the quantum accuracy threshold is derived from a computer-assisted combinatorial analysis.
Abstract: We prove a new version of the quantum threshold theorem that applies to concatenation of a quantum code that corrects only one error, and we use this theorem to derive a rigorous lower bound on the quantum accuracy threshold ε_0. Our proof also applies to concatenation of higher-distance codes, and to noise models that allow faults to be correlated in space and in time. The proof uses new criteria for assessing the accuracy of fault-tolerant circuits, which are particularly conducive to the inductive analysis of recursive simulations. Our lower bound on the threshold, ε_0 > 2.73 × 10^{-5} for an adversarial independent stochastic noise model, is derived from a computer-assisted combinatorial analysis; it is the best lower bound that has been rigorously proven so far.

Book
01 Jan 2005
TL;DR: This book covers Gaussian processes and related structures, matching theorems, the Bernoulli conjecture, families of distances, and applications to Banach space theory.
Abstract: Overview and Basic Facts.- Gaussian Processes and Related Structures.- Matching Theorems.- The Bernoulli Conjecture.- Families of distances.- Applications to Banach Space Theory.

Journal ArticleDOI
TL;DR: In this article, a new upper bound formulation of limit analysis of two and three-dimensional solids is presented, which is formulated in terms of stresses rather than velocities and plastic multipliers, and by means of duality theory it is shown that the formulation does indeed result in rigorous upper bound solutions.
Abstract: A new upper bound formulation of limit analysis of two- and three-dimensional solids is presented. In contrast to most discrete upper bound methods the present one is formulated in terms of stresses rather than velocities and plastic multipliers. However, by means of duality theory it is shown that the formulation does indeed result in rigorous upper bound solutions. Also, kinematically admissible discontinuities, which have previously been shown to be very efficient, are given an interpretation in terms of stresses. This allows for a much simpler implementation and, in contrast to existing formulations, extension to arbitrary yield criteria in two and three dimensions is straightforward. Finally, the capabilities of the new method are demonstrated through a number of examples.

Proceedings ArticleDOI
22 May 2005
TL;DR: A 1-pass O(m^{1-2/k})-space algorithm for computing the k-th frequency moment of a data stream for any real k > 2; the algorithm also works for streams with deletions, and its update time is O(1).
Abstract: We give a 1-pass O(m^{1-2/k})-space algorithm for computing the k-th frequency moment of a data stream for any real k > 2. Together with the lower bounds of [1, 2, 4], this resolves the main problem left open by Alon et al. in 1996 [1]. Our algorithm also works for streams with deletions and thus gives an O(m^{1-2/p})-space algorithm for the L_p difference problem for any p > 2. This essentially matches the known Ω(m^{1-2/p-o(1)}) lower bound of [12, 2]. Finally, the update time of our algorithms is O(1).
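
For orientation, the k-th frequency moment is F_k = Σ_i f_i^k, where f_i is the number of occurrences of item i in the stream; the paper's contribution is estimating it in O(m^{1-2/k}) space, but an exact linear-space reference makes the target quantity concrete:

```python
from collections import Counter

def frequency_moment(stream, k: float) -> float:
    """Exact k-th frequency moment F_k = sum_i f_i^k (linear space).

    The streaming algorithm in the paper approximates this quantity using
    only O(m^(1 - 2/k)) space for k > 2; this reference version just counts.
    """
    freqs = Counter(stream)
    return sum(f ** k for f in freqs.values())

print(frequency_moment("abracadabra", 2))  # f = {a:5, b:2, r:2, c:1, d:1} -> 35
```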

Journal ArticleDOI
TL;DR: A simple new randomized algorithm, called ResolveSat, for finding satisfying assignments of Boolean formulas in conjunctive normal form; it is the fastest known probabilistic algorithm for k-CNF satisfiability, and its analysis yields an upper bound on the number of codewords of a code defined by a k-CNF, used to prove lower bounds on depth-3 circuits.
Abstract: We propose and analyze a simple new randomized algorithm, called ResolveSat, for finding satisfying assignments of Boolean formulas in conjunctive normal form. The algorithm consists of two stages: a preprocessing stage in which resolution is applied to enlarge the set of clauses of the formula, followed by a search stage that uses a simple randomized greedy procedure to look for a satisfying assignment. Currently, this is the fastest known probabilistic algorithm for k-CNF satisfiability for k ≥ 4 (with a running time of O(2^{0.5625n}) for 4-CNF). In addition, it is the fastest known probabilistic algorithm for k-CNF formulas, k ≥ 3, that have at most one satisfying assignment (unique k-SAT) (with a running time O(2^{(2 ln 2 - 1)n + o(n)}) = O(2^{0.386...n}) in the case of 3-CNF). The analysis of the algorithm also gives an upper bound on the number of codewords of a code defined by a k-CNF. This is applied to prove lower bounds on depth-3 circuits accepting codes with nonconstant distance. In particular, we prove a lower bound Ω(2^{1.282...√n}).
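
ResolveSat's search stage is in the PPZ family: process variables in random order, setting a variable by unit propagation when some clause forces it and uniformly at random otherwise. A minimal sketch of that stage under stated assumptions (clauses as lists of signed integers; the bounded-resolution preprocessing that distinguishes ResolveSat is omitted):

```python
import random

def ppz_style_search(clauses, n_vars, tries=10_000, seed=0):
    """Randomized greedy search stage in the PPZ/ResolveSat family (sketch only).

    clauses: list of clauses, each a list of nonzero ints; literal v > 0 means
    variable v is True, v < 0 means it is False.  Returns a model or None.
    """
    rng = random.Random(seed)
    for _ in range(tries):
        assign = {}
        for v in rng.sample(range(1, n_vars + 1), n_vars):
            forced = None
            for c in clauses:
                if any(assign.get(abs(l)) == (l > 0) for l in c):
                    continue                      # clause already satisfied
                free = [l for l in c if abs(l) not in assign]
                if len(free) == 1 and abs(free[0]) == v:
                    forced = free[0] > 0          # unit clause forces v
            assign[v] = forced if forced is not None else (rng.random() < 0.5)
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(ppz_style_search([[1, 2], [-1, 3], [-2, -3]], 3))
```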

Journal ArticleDOI
TL;DR: A Gaussian orthogonal relay model is investigated, and it is shown that when the relay-to-destination signal-to-noise ratio (SNR) is less than a certain threshold, the capacity at the optimizing θ is also the maximum capacity of the channel over all possible resource allocation parameters.
Abstract: A Gaussian orthogonal relay model is investigated, where the source transmits to the relay and destination in channel 1, and the relay transmits to the destination in channel 2, with channels 1 and 2 being orthogonalized in the time-frequency plane in order to satisfy practical constraints. The total available channel resource (time and bandwidth) is split into the two orthogonal channels, and the resource allocation to the two channels is considered to be a design parameter that needs to be optimized. The main focus of the analysis is on the case where the source-to-relay link is better than the source-to-destination link, which is the usual scenario encountered in practice. A lower bound on the capacity (achievable rate) is derived, and optimized over the parameter θ, which represents the fraction of the resource assigned to channel 1. It is shown that the lower bound achieves the max-flow min-cut upper bound at the optimizing θ, the common value thus being the capacity of the channel at the optimizing θ. Furthermore, it is shown that when the relay-to-destination signal-to-noise ratio (SNR) is less than a certain threshold, the capacity at the optimizing θ is also the maximum capacity of the channel over all possible resource allocation parameters θ. Finally, the achievable rates for optimal and equal resource allocations are compared, and it is shown that optimizing the resource allocation yields significant performance gains.

Journal ArticleDOI
TL;DR: Stability results for unconstrained discrete-time nonlinear systems controlled using finite-horizon model predictive control algorithms that do not require the terminal cost to be a local control Lyapunov function are presented.
Abstract: We present stability results for unconstrained discrete-time nonlinear systems controlled using finite-horizon model predictive control (MPC) algorithms that do not require the terminal cost to be a local control Lyapunov function. The two key assumptions we make are that the value function is bounded by a K_∞ function of a state measure related to the distance of the state to the target set and that this measure is detectable from the stage cost. We show that these assumptions are sufficient to guarantee closed-loop asymptotic stability that is semiglobal and practical in the horizon length and robust to small perturbations. If the assumptions hold with linear (or locally linear) K_∞ functions, then the stability will be global (or semiglobal) for long enough horizon lengths. In the global case, we give an explicit formula for a sufficiently long horizon length. We relate the upper bound assumption to exponential and asymptotic controllability. Using terminal and stage costs that are controllable to zero with respect to a state measure, we can guarantee the required upper bound, but we also require that the state measure be detectable from the stage cost to ensure stability. While such costs and state measures may not be easy to construct in general, we explore a class of systems, called homogeneous systems, for which it is straightforward to choose them. In fact, we show for homogeneous systems that the associated K_∞ functions are linear, thereby guaranteeing global asymptotic stability. We discuss two examples found elsewhere in the MPC literature, including the discrete-time nonholonomic integrator, to demonstrate our methods. For these systems, we give a new result: They can be globally asymptotically stabilized by a finite-horizon MPC algorithm that has guaranteed robustness. We also show that stable linear systems with control constraints can be globally exponentially stabilized using finite-horizon MPC without requiring the terminal cost to be a global control Lyapunov function.

Journal ArticleDOI
TL;DR: An information-theoretic upper bound on the rate per communication pair in a large ad hoc wireless network is derived and it is shown that under minimal conditions on the attenuation due to the environment and for networks with a constant density of users, this rate tends to zero as the number of users gets large.
Abstract: We derive an information-theoretic upper bound on the rate per communication pair in a large ad hoc wireless network. We show that under minimal conditions on the attenuation due to the environment and for networks with a constant density of users, this rate tends to zero as the number of users gets large.

Journal ArticleDOI
TL;DR: An analytical lower bound on the concurrence of a bipartite quantum state in arbitrary dimension is derived, relating concurrence to the Peres-Horodecki criterion and the realignment criterion, and the bound is demonstrated to be exact for some mixed quantum states.
Abstract: We derive an analytical lower bound for the concurrence of a bipartite quantum state in arbitrary dimension. A functional relation is established relating concurrence, the Peres-Horodecki criterion, and the realignment criterion. We demonstrate that our bound is exact for some mixed quantum states. The significance of our method is illustrated by giving a quantitative evaluation of entanglement for many bound entangled states, some of which fail to be identified by the usual concurrence estimation method.
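
A sketch of how such a bound is evaluated in practice is below. The constant and the exact form used here, C(ρ) ≥ sqrt(2/(m(m-1))) · (max(‖ρ^{T_A}‖₁, ‖R(ρ)‖₁) - 1), are quoted from memory and should be checked against the paper; the partial-transpose and realignment maps themselves are standard:

```python
import numpy as np

def trace_norm(m: np.ndarray) -> float:
    """Trace norm: sum of singular values."""
    return float(np.linalg.svd(m, compute_uv=False).sum())

def concurrence_lower_bound(rho: np.ndarray, dims=(2, 2)) -> float:
    """Lower bound on concurrence from the PPT and realignment criteria
    (illustrative sketch; normalization quoted from memory)."""
    m, n = dims
    r = rho.reshape(m, n, m, n)
    rho_ta = r.transpose(2, 1, 0, 3).reshape(m * n, m * n)     # partial transpose on A
    realigned = r.transpose(0, 2, 1, 3).reshape(m * m, n * n)  # realignment map
    val = max(trace_norm(rho_ta), trace_norm(realigned)) - 1.0
    return max(0.0, float(np.sqrt(2.0 / (m * (m - 1)))) * val)

# Usage: a mixture of a Bell state with white noise.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
bell = np.outer(psi, psi)
for p in (0.2, 0.5, 1.0):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, round(concurrence_lower_bound(rho), 4))  # p = 1 gives 1, the exact value
```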

Posted Content
TL;DR: In this article, a lower bound on the minimal size of a quantum circuit exactly implementing a unitary operation U is provided by the length of the minimal geodesic between U and the identity, where length is defined by a suitable Finsler metric on SU(2^n).
Abstract: What is the minimal size quantum circuit required to exactly implement a specified n-qubit unitary operation, U, without the use of ancilla qubits? We show that a lower bound on the minimal size is provided by the length of the minimal geodesic between U and the identity, I, where length is defined by a suitable Finsler metric on SU(2^n). The geodesic curves of such a metric have the striking property that once an initial position and velocity are set, the remainder of the geodesic is completely determined by a second order differential equation known as the geodesic equation. This is in contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit. Geodesic analysis thus offers a potentially powerful approach to the problem of proving quantum circuit lower bounds. In this paper we construct several Finsler metrics whose minimal length geodesics provide lower bounds on quantum circuit size, and give a procedure to compute the corresponding geodesic equation. We also construct a large class of solutions to the geodesic equation, which we call Pauli geodesics, since they arise from isometries generated by the Pauli group. For any unitary U diagonal in the computational basis, we show that: (a) provided the minimal length geodesic is unique, it must be a Pauli geodesic; (b) finding the length of the minimal Pauli geodesic passing from I to U is equivalent to solving an exponential size instance of the closest vector in a lattice problem (CVP); and (c) all but a doubly exponentially small fraction of such unitaries have minimal Pauli geodesics of exponential length.

Journal ArticleDOI
TL;DR: The capacity of spatially correlated Rician multiple-input multiple-output (MIMO) channels in the general case with double-sided correlation and arbitrary rank channel means is considered and tight upper and lower bounds on the ergodic capacity are derived.
Abstract: This paper considers the capacity of spatially correlated Rician multiple-input multiple-output (MIMO) channels. We consider the general case with double-sided correlation and arbitrary rank channel means. We derive tight upper and lower bounds on the ergodic capacity. In the particular cases when the numbers of transmit and receive antennas are equal, or when the correlation is single sided, we derive more specific bounds which are computationally efficient. The bounds are shown to reduce to known results in cases of independent and identically distributed (i.i.d.) and correlated Rayleigh MIMO channels. We also analyze the outage characteristics of the correlated Rician MIMO channels at high signal-to-noise ratio (SNR). We derive the mean and variance of the mutual information and show that it is well approximated by a Gaussian distribution. Finally, we present numerical results which show the effect of the antenna configuration, correlation level (angle spreads), Rician K-factor, and the geometry of the dominant Rician paths.

Journal ArticleDOI
TL;DR: This paper generalizes the "terrorist threat problem" first defined by Salmerón, Wood, and Baldick by formulating it as a bilevel programming problem, and converts it into an equivalent single-level mixed-integer linear program by replacing the inner optimization by its Karush-Kuhn-Tucker optimality conditions.
Abstract: This paper generalizes the "terrorist threat problem" first defined by Salmerón, Wood, and Baldick by formulating it as a bilevel programming problem. Specifically, the bilevel model allows one to define different objective functions for the terrorist and the system operator as well as permitting the imposition of constraints on the outer optimization that are functions of both the inner and outer variables. This degree of flexibility is not possible through existing max-min models. The bilevel formulation is investigated through a problem in which the goal of the destructive agent is to minimize the number of power system components that must be destroyed in order to cause a loss of load greater than or equal to a specified level. This goal is tempered by the logical assumption that, following a deliberate outage, the system operator will implement all feasible corrective actions to minimize the level of system load shed. The resulting nonlinear mixed-integer bilevel programming formulation is transformed into an equivalent single-level mixed-integer linear program by replacing the inner optimization by its Karush-Kuhn-Tucker optimality conditions and converting a number of nonlinearities to linear equivalents using some well-known integer algebra results. The equivalent formulation has been tested on two case studies, including the 24-bus IEEE Reliability Test System, through the use of commercially available software.
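
The core transformation is the standard one: replace the inner (here convex) problem by its KKT conditions and linearize each complementarity product with a binary variable and a big-M constant. A generic sketch for an inner LP min_y c^T y subject to Ay ≤ b + Dx, in notation chosen here rather than the paper's:

```latex
\begin{aligned}
&Ay \le b + Dx, \qquad A^{\top}\lambda = -c, \qquad \lambda \ge 0,\\
&\lambda_i \le M z_i, \qquad (b + Dx - Ay)_i \le M(1 - z_i), \qquad z_i \in \{0,1\} \quad \forall i,
\end{aligned}
```

so that each binary z_i selects which side of the complementarity condition λ_i (b + Dx - Ay)_i = 0 is active, at the cost of choosing a valid bound M.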

Journal ArticleDOI
TL;DR: A new upper bound on the maximum size A(n,d) of a binary code of word length n and minimum distance at least d is given, based on block-diagonalizing the Terwilliger algebra of the Hamming cube.
Abstract: We give a new upper bound on the maximum size A(n,d) of a binary code of word length n and minimum distance at least d. It is based on block-diagonalizing the Terwilliger algebra of the Hamming cube. The bound strengthens the Delsarte bound, and can be calculated with semidefinite programming in time bounded by a polynomial in n. We show that it improves a number of known upper bounds for concrete values of n and d. From this we also derive a new upper bound on the maximum size A(n,d,w) of a binary code of word length n, minimum distance at least d, and constant weight w, again strengthening the Delsarte bound and yielding several improved upper bounds for concrete values of n, d, and w.
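
For context, the classical Delsarte linear-programming bound that the semidefinite bound strengthens can be computed in a few lines; a minimal sketch (the formulation is standard, the implementation details are assumptions):

```python
from math import comb
from scipy.optimize import linprog

def krawtchouk(n: int, k: int, x: int) -> int:
    """Krawtchouk polynomial K_k(x) for the binary Hamming scheme H(n, 2)."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

def delsarte_lp_bound(n: int, d: int) -> float:
    """Delsarte's LP upper bound on A(n, d).

    Variables: distance distribution A_d, ..., A_n >= 0 (A_0 = 1 fixed,
    A_1 = ... = A_{d-1} = 0).  Constraints: sum_i A_i K_k(i) >= -K_k(0)
    for every k; objective: maximize 1 + sum_i A_i.
    """
    idx = list(range(d, n + 1))
    c = [-1.0] * len(idx)  # maximize sum A_i  <=>  minimize -sum A_i
    a_ub = [[-float(krawtchouk(n, k, i)) for i in idx] for k in range(n + 1)]
    b_ub = [float(krawtchouk(n, k, 0)) for k in range(n + 1)]
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return 1.0 - res.fun

print(delsarte_lp_bound(5, 3))  # an upper bound on A(5,3); the true value is 4
```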

Journal ArticleDOI
TL;DR: The paper proves that the set of contexts and the set of properties of a concept are each a complete orthocomplemented lattice, and shows that the context lattice as well as the property lattice are non-classical, i.e. quantum-like, lattices.
Abstract: Purpose – To elaborate a theory for modeling concepts that incorporates how a context influences the typicality of a single exemplar and the applicability of a single property of a concept. To investigate the structure of the sets of contexts and properties. Design/methodology/approach – The effect of context on the typicality of an exemplar and the applicability of a property is accounted for by introducing the notion of "state of a concept", and making use of the state-context-property formalism (SCOP), a generalization of the quantum formalism, whose basic notions are states, contexts and properties. Findings – The paper proves that the set of contexts and the set of properties of a concept are each a complete orthocomplemented lattice, i.e. a set with a partial order relation, such that for each subset there exists a greatest lower bound and a least upper bound, and such that for each element there exists an orthocomplement. This structure describes the "and", "or", and "not", respectively, for contexts and properties.

Journal ArticleDOI
TL;DR: An optimal QR decomposition is proposed, which is called the equal-diagonal QR decomposition, or briefly the QRS decomposition, and the performance of the QR detector is asymptotically equivalent to that of the maximum-likelihood detector (MLD) that uses the same precoder.
Abstract: In multiple-input multiple-output (MIMO) multiuser detection theory, the QR decomposition of the channel matrix H can be used to form the back-cancellation detector. In this paper, we propose an optimal QR decomposition, which we call the equal-diagonal QR decomposition, or briefly the QRS decomposition. We apply the decomposition to precoded successive-cancellation detection, where we assume that both the transmitter and the receiver have perfect channel knowledge. We show that, for any channel matrix H, there exists a unitary precoder matrix S, such that HS=QR, where the nonzero diagonal entries of the upper triangular matrix R in the QR decomposition of HS are all equal to each other. The precoder and the resulting successive-cancellation detector have the following properties. a) The minimum Euclidean distance between two signal points at the channel output is equal to the minimum Euclidean distance between two constellation points at the precoder input up to a multiplicative factor that equals the diagonal entry in the R-factor. b) The superchannel HS naturally exhibits an optimally ordered column permutation, i.e., the optimal detection order for the vertical Bell Labs layered space-time (V-BLAST) detector is the natural order. c) The precoder S minimizes the block error probability of the QR successive cancellation detector. d) A lower and an upper bound for the free distance at the channel output is expressible in terms of the diagonal entries of the R-factor in the QR decomposition of a channel matrix. e) The precoder S maximizes the lower bound of the channel's free distance subject to a power constraint. f) For the optimal precoder S, the performance of the QR detector is asymptotically (at large signal-to-noise ratios (SNRs)) equivalent to that of the maximum-likelihood detector (MLD) that uses the same precoder. Further, we consider two multiplexing schemes: time-division multiple access (TDMA) and orthogonal frequency-division multiplexing (OFDM).
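
For context, the back-cancellation detector that the QRS precoder optimizes: with HS = QR, multiply the received vector by Q^H and detect symbols from the bottom of the triangular system upward, cancelling decided symbols as you go. A minimal BPSK sketch of plain QR successive cancellation (the equal-diagonal precoder construction itself is not reproduced here; all parameter values are illustrative):

```python
import numpy as np

def qr_successive_cancellation(h: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Detect BPSK symbols x in y = H x + noise via QR back-cancellation.

    With H = QR (R upper triangular), z = Q^H y = R x + noise; detect from
    the last symbol up, cancelling already-decided symbols.
    """
    q, r = np.linalg.qr(h)
    z = q.conj().T @ y
    n = h.shape[1]
    x_hat = np.zeros(n)
    for i in range(n - 1, -1, -1):
        resid = z[i] - r[i, i + 1:] @ x_hat[i + 1:]
        x_hat[i] = 1.0 if (resid / r[i, i]).real >= 0 else -1.0
    return x_hat

rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(x, qr_successive_cancellation(H, y))  # should match at this noise level
```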