
Showing papers in "IEEE Transactions on Information Theory in 2005"


Journal Article
TL;DR: f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well: f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Abstract: This paper considers a natural error correcting problem with real-valued input/output. We wish to recover an input vector f ∈ ℝⁿ from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖_{ℓ1} := Σ_i |x_i|) min_{g ∈ ℝⁿ} ‖y - Ag‖_{ℓ1}, provided that the support of the vector of errors is not too large: ‖e‖_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
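
As a concrete illustration, the ℓ1 decoding problem can be recast as a linear program by introducing slack variables for the residual. A minimal sketch with scipy; the toy sizes, the Gaussian coding matrix, and the error pattern are assumptions for illustration (the paper's conditions on A are not checked here, so recovery is only typical, not guaranteed):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, n_err = 60, 20, 10                    # assumed toy sizes
A = rng.standard_normal((m, n))             # assumed Gaussian coding matrix
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, n_err, replace=False)] = rng.standard_normal(n_err)
y = A @ f + e                               # corrupted measurements

# min_g ||y - A g||_1  as an LP:  min sum(t)  s.t.  -t <= y - A g <= t
c = np.r_[np.zeros(n), np.ones(m)]
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.r_[y, -y]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
f_hat = res.x[:n]
print(np.max(np.abs(f_hat - f)))            # ~1e-9: exact recovery
```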

6,853 citations


Journal Article
TL;DR: The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted.
Abstract: Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.
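
For context, the classical full-duplex decode-and-forward rate for the scalar Gaussian relay channel (the Cover/El Gamal form that these strategies build on) is easy to evaluate numerically. A sketch under assumed receive SNRs; this is the textbook single-relay expression, not the paper's general multi-relay results:

```python
import numpy as np

C = lambda x: np.log2(1 + x)                 # complex AWGN capacity

def df_rate(S_sd, S_sr, S_rd, grid=10001):
    """Classical full-duplex decode-and-forward rate, scalar Gaussian relay
    channel; S_xy are assumed receive SNRs on the source-destination,
    source-relay, and relay-destination links."""
    rho = np.linspace(0, 1, grid)            # source-relay correlation
    r_relay = C((1 - rho**2) * S_sr)         # relay must decode the message
    r_dest = C(S_sd + S_rd + 2 * rho * np.sqrt(S_sd * S_rd))  # coherent combining
    return np.max(np.minimum(r_relay, r_dest))

# relay near the source (strong S_sr) vs. direct transmission:
print(df_rate(1.0, 10.0, 10.0), C(1.0))
```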

2,842 citations


Journal Article
TL;DR: This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a "valid" or "maxent-normal" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the "Bethe method", the "junction graph method", the "cluster variation method", and the "region graph method". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.
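
A minimal reminder of the starting point, that BP is exact on trees, which the region-based methods generalize to graphs with cycles. The sketch builds a three-variable chain with assumed random pairwise factors and checks the BP marginal against brute force:

```python
import numpy as np

rng = np.random.default_rng(0)
# p(x1,x2,x3) proportional to psi12[x1,x2] * psi23[x2,x3], binary variables
psi12 = rng.random((2, 2))
psi23 = rng.random((2, 2))

# sum-product on a tree: the marginal of x2 is the product of the two
# incoming messages, each a sum over the neighboring variable
m1to2 = psi12.sum(axis=0)                  # message x1 -> x2
m3to2 = psi23.sum(axis=1)                  # message x3 -> x2
b2 = m1to2 * m3to2
b2 /= b2.sum()

# brute-force marginal for comparison
p = psi12[:, :, None] * psi23[None, :, :]  # joint over (x1, x2, x3)
b2_bf = p.sum(axis=(0, 2))
b2_bf /= b2_bf.sum()
assert np.allclose(b2, b2_bf)              # exact: the factor graph is a tree
```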

1,827 citations


Journal Article
TL;DR: Simulation results show that the PEG algorithm is a powerful method for generating good short-block-length LDPC codes.
Abstract: We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called the progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful method for generating good short-block-length LDPC codes.
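
A sketch of the edge-by-edge greedy construction, under simplifying assumptions (uniform symbol degree, ties broken by lowest current check degree); the published PEG algorithm is more refined, but the BFS "connect to the farthest check" step is the core idea:

```python
def peg(n_sym, n_chk, deg):
    """Simplified progressive edge-growth sketch (assumed: uniform symbol
    degree `deg`, ties broken by lowest current check degree)."""
    sym_nbrs = [set() for _ in range(n_sym)]     # symbol -> checks
    chk_nbrs = [set() for _ in range(n_chk)]     # check  -> symbols
    for s in range(n_sym):
        for k in range(deg):
            if k == 0:
                cand = list(range(n_chk))        # first edge: any check
            else:
                # BFS from s; record the depth at which each check is reached
                depth, seen_sym = {}, {s}
                frontier, d = set(sym_nbrs[s]), 0
                while frontier:
                    for c in frontier:
                        depth.setdefault(c, d)
                    nxt = {t for c in frontier for t in chk_nbrs[c]} - seen_sym
                    seen_sym |= nxt
                    frontier = {c for t in nxt for c in sym_nbrs[t]} - depth.keys()
                    d += 1
                unreached = [c for c in range(n_chk) if c not in depth]
                if unreached:                    # stays cycle-free at this step
                    cand = unreached
                else:                            # else maximize the local girth:
                    dmax = max(depth.values())   # connect to a farthest check
                    cand = [c for c in depth if depth[c] == dmax]
            cand = [c for c in cand if c not in sym_nbrs[s]] or cand
            c = min(cand, key=lambda j: len(chk_nbrs[j]))
            sym_nbrs[s].add(c)
            chk_nbrs[c].add(s)
    return sym_nbrs

print(peg(n_sym=8, n_chk=4, deg=2))
```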

1,507 citations


Journal Article
TL;DR: This work proposes a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback.
Abstract: In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and increasing n, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.
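
A sketch of the scheme for one coherence block, assuming single-antenna users (N = 1), unit noise power, and equal power per beam; in the actual scheme each user would feed back only its best SINR and the corresponding beam index:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, snr = 4, 100, 10.0                     # beams, users, total SNR (assumed)
# i.i.d. Rayleigh user channels and one random unitary beamforming matrix
H = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
Q, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
G = np.abs(H @ Q) ** 2                       # |h_k^H phi_m|^2 for user k, beam m
P = snr / M                                  # equal power per beam, unit noise
sinr = P * G / (1 + P * (G.sum(axis=1, keepdims=True) - G))
best = sinr.argmax(axis=0)                   # scheduler: best user on each beam
print(np.log2(1 + sinr[best, np.arange(M)]).sum())   # achieved sum rate
```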

1,450 citations


Journal Article
TL;DR: Compared to direct transmission and traditional multihop protocols, the results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.
Abstract: We consider three-node wireless relay channels in a Rayleigh-fading environment. Assuming transmitter channel state information (CSI), we study upper bounds and lower bounds on the outage capacity and the ergodic capacity. Our studies take into account practical constraints on the transmission/reception duplexing at the relay node and on the synchronization between the source node and the relay node. We also explore power allocation. Compared to direct transmission and traditional multihop protocols, our results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.

1,208 citations


Journal Article
TL;DR: Cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site are investigated under the Zheng-Tse diversity-multiplexing tradeoff.
Abstract: We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using the Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes, namely, amplify-and-forward (AF) protocols and decode-and-forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays, where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding/encoding at the relays. For the class of DF protocols, we develop a dynamic decode-and-forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0 ≤ r ≤ 1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel, where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.
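
The benchmark used throughout is the Zheng-Tse point-to-point tradeoff curve, piecewise linear through the points (k, (M-k)(N-k)); a small evaluator, assuming this standard form:

```python
def dmt_point_to_point(M, N, r):
    """Zheng-Tse optimal diversity-multiplexing tradeoff d*(r) for an M x N
    MIMO link: piecewise-linear through the points (k, (M-k)(N-k))."""
    assert 0 <= r <= min(M, N)
    k = min(int(r), min(M, N) - 1)      # segment index
    d0 = (M - k) * (N - k)
    d1 = (M - k - 1) * (N - k - 1)
    return d0 + (r - k) * (d1 - d0)     # linear interpolation on the segment

# e.g. the 2x1 curve against which two-partner cooperation is measured:
print(dmt_point_to_point(2, 1, 0.5))    # 1.0
```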

1,189 citations


Journal Article
TL;DR: In this article, the authors showed that the derivative of the mutual information with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics.
Abstract: This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: For any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose SNR is chosen uniformly distributed between 0 and SNR.
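
For the Gaussian-input special case, the relation can be checked directly from the closed forms I(snr) = ½ ln(1 + snr) (nats) and mmse(snr) = 1/(1 + snr); a tiny numerical sanity check, where finite differencing is the only approximation:

```python
import numpy as np

snr = np.linspace(0.01, 10.0, 4000)
I = 0.5 * np.log(1 + snr)            # mutual information, Gaussian input (nats)
mmse = 1 / (1 + snr)                 # MMSE of estimating the input
dI = np.gradient(I, snr)             # numerical derivative dI/dsnr
print(np.abs(dI - mmse / 2).max())   # ~1e-6: dI/dsnr = mmse/2
```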

1,129 citations


Journal Article
TL;DR: Deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity are given and extended to integer capacities and to codes that are tolerant to edge failures.
Abstract: The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures.
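
The canonical example of the gain from coding at intermediate nodes is the butterfly network: routing alone cannot deliver both source bits to both sinks through the unit-capacity bottleneck, but a single XOR can. A two-line check:

```python
# Butterfly network: the source multicasts bits (a, b) to two sinks; the
# unit-capacity bottleneck edge forwards one coded bit instead of choosing.
a, b = 1, 0
m = a ^ b                 # linear (XOR) coding at the bottleneck node
sink1 = (a, a ^ m)        # sink 1 sees a directly, decodes b = a XOR m
sink2 = (b ^ m, b)        # sink 2 sees b directly, decodes a = b XOR m
assert sink1 == (a, b) and sink2 == (a, b)   # both sinks get both bits
```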

1,046 citations


Journal Article
TL;DR: This correspondence proposes a quantized precoding system where the optimal precoder is chosen from a finite codebook known to both receiver and transmitter and performs close to optimal unitary precoding with a minimal amount of feedback.
Abstract: Multiple-input multiple-output (MIMO) wireless systems use antenna arrays at both the transmitter and receiver to provide communication links with substantial diversity and capacity. Spatial multiplexing is a common space-time modulation technique for MIMO communication systems where independent information streams are sent over different transmit antennas. Unfortunately, spatial multiplexing is sensitive to ill-conditioning of the channel matrix. Precoding can improve the resilience of spatial multiplexing at the expense of full channel knowledge at the transmitter, which is often not realistic. This correspondence proposes a quantized precoding system where the optimal precoder is chosen from a finite codebook known to both receiver and transmitter. The index of the optimal precoder is conveyed from the receiver to the transmitter over a low-delay feedback link. Criteria are presented for selecting the optimal precoding matrix based on the error rate and mutual information for different receiver designs. Codebook design criteria are proposed for each selection criterion by minimizing a bound on the average distortion assuming a Rayleigh-fading matrix channel. The design criteria are shown to be equivalent to packing subspaces in the Grassmann manifold using the projection two-norm and Fubini-Study distances. Simulation results show that the proposed system outperforms antenna subset selection and performs close to optimal unitary precoding with a minimal amount of feedback.
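
A sketch of the selection step, with an assumed random (rather than Grassmannian-packed) codebook and the mutual-information selection criterion; the receiver would feed back only the log2(L)-bit index:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Ns, L = 4, 2, 2, 16        # tx/rx antennas, streams, codebook size (assumed)
snr = 10.0
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

def rand_precoder():
    """Random Nt x Ns matrix with orthonormal columns; a stand-in for one
    entry of a Grassmannian-packed codebook."""
    A = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
    return np.linalg.qr(A)[0]

book = [rand_precoder() for _ in range(L)]

def mi(F):                         # mutual-information selection criterion
    HF = H @ F
    return np.log2(np.linalg.det(np.eye(Ns) + snr / Ns * HF.conj().T @ HF).real)

best = max(range(L), key=lambda i: mi(book[i]))   # index fed back to transmitter
print(best, mi(book[best]))
```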

943 citations


Journal Article
TL;DR: It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel.
Abstract: We study the capacity of multiple-input multiple-output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.

Journal Article
Igor Devetak
TL;DR: In this paper, the capacity of a quantum channel for transmitting private classical information is derived, which is shown to be equal to the capacity for generating a secret key, and neither capacity is enhanced by forward public classical communication.
Abstract: A formula for the capacity of a quantum channel for transmitting private classical information is derived. This is shown to be equal to the capacity of the channel for generating a secret key, and neither capacity is enhanced by forward public classical communication. Motivated by the work of Schumacher and Westmoreland on quantum privacy and quantum coherence, parallels between private classical information and quantum information are exploited to obtain an expression for the capacity of a quantum channel for generating pure bipartite entanglement. The latter implies a new proof of the quantum channel coding theorem and a simple proof of the converse. The coherent information plays a role in all of the above-mentioned capacities.

Journal Article
TL;DR: In this article, the Golden code for a 2/spl times/2 multiple-input multiple-output (MIMO) system is presented, where the Golden number 1+/spl radic/5/2 is used.
Abstract: In this paper, the Golden code for a 2×2 multiple-input multiple-output (MIMO) system is presented. This is a full-rate 2×2 linear dispersion algebraic space-time code with unprecedented performance based on the golden number (1+√5)/2.
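
One common statement of the construction (as given by Belfiore, Rekaya, and Viterbo), reproduced here as a sketch: θ is the golden number, θ̄ its algebraic conjugate, and α = 1 + i(1 - θ); four QAM symbols are carried over two channel uses (full rate):

```python
import numpy as np

theta = (1 + np.sqrt(5)) / 2            # the golden number
thetab = (1 - np.sqrt(5)) / 2           # its algebraic conjugate
alpha = 1 + 1j * (1 - theta)
alphab = 1 + 1j * (1 - thetab)

def golden_codeword(s1, s2, s3, s4):
    """2x2 Golden codeword carrying four QAM symbols over two channel uses."""
    return (1 / np.sqrt(5)) * np.array(
        [[alpha * (s1 + s2 * theta), alpha * (s3 + s4 * theta)],
         [1j * alphab * (s3 + s4 * thetab), alphab * (s1 + s2 * thetab)]])

X = golden_codeword(1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j)
print(np.abs(np.linalg.det(X)))         # determinant stays bounded away from 0
```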

Journal Article
TL;DR: This work develops and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
Abstract: We develop and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.

Journal Article
TL;DR: The density function of the distance to the n-th nearest neighbor of a homogeneous process in ℝᵐ is shown to be governed by a generalized Gamma distribution, a result with many implications for large wireless networks of randomly distributed nodes.
Abstract: The distribution of Euclidean distances in Poisson point processes is determined. The main result is the density function of the distance to the n-th nearest neighbor of a homogeneous process in ℝᵐ, which is shown to be governed by a generalized Gamma distribution. The result has many implications for large wireless networks of randomly distributed nodes.
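
The result can be sanity-checked by Monte Carlo: in ℝ², the number of points of a PPP of density λ within radius r of the origin is Poisson with mean μ = λπr², so P(R_n ≤ r) = 1 - Σ_{k<n} e^(-μ) μ^k / k!. A sketch, with the window size and trial count as assumptions:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
lam, n, R, T = 1.0, 2, 6.0, 20000      # density, neighbor index, window, trials
samples = []
for _ in range(T):
    k = rng.poisson(lam * np.pi * R**2)
    if k < n:
        continue                        # negligible event for this window size
    r = R * np.sqrt(rng.random(k))      # radii of points uniform in the disk
    samples.append(np.sort(r)[n - 1])   # distance to the n-th nearest point
samples = np.array(samples)

r0 = 1.0
mu = lam * np.pi * r0**2                # mean count in a ball of radius r0
emp = (samples <= r0).mean()
theo = 1 - sum(np.exp(-mu) * mu**j / factorial(j) for j in range(n))
print(emp, theo)                        # agree to Monte Carlo accuracy
```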

Journal Article
TL;DR: The definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers"; the fractional distance, a lower bound on the classical distance, is also introduced.
Abstract: A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting "LP decoder" generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers." The fractional distance d_frac of a code is introduced, which is a lower bound on the classical distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ - 1 errors and that there are codes with d_frac = Ω(n^(1-ε)). An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
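
A minimal LP decoder for an assumed toy parity-check matrix, using the standard odd-subset inequalities of the relaxed parity polytope for each check; γ holds the channel log-likelihood ratios, and an integral optimum certifies the ML codeword while a fractional one is a pseudocodeword:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# assumed toy parity-check matrix and received LLRs, for illustration only
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
n = H.shape[1]
gamma = np.array([-1.2, 0.8, 0.3, -0.5, 1.1, 0.7])  # log p(y|0)/p(y|1)

A_ub, b_ub = [], []
for row in H:
    Nc = np.flatnonzero(row)
    for size in range(1, len(Nc) + 1, 2):       # odd-size subsets S of N(c)
        for S in combinations(Nc, size):
            a = np.zeros(n)
            a[list(S)] = 1.0
            a[[i for i in Nc if i not in S]] = -1.0
            A_ub.append(a)
            b_ub.append(len(S) - 1)             # sum_S f - sum_rest f <= |S|-1

res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n)
print(res.x)   # integral => ML codeword; fractional => pseudocodeword
```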

Journal Article
TL;DR: Upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived, and it can be shown that FH-CDMA obtains a higher transmission capacity on the order of M^(1-2/α), where M is the spreading factor and α > 2 is the path loss exponent.
Abstract: In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^(1-2/α), where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.
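
A sketch of the underlying outage experiment (all parameters assumed): interferers form a PPP around a reference receiver at link distance d, and a transmission succeeds when the SIR exceeds a target β; sweeping the density against an outage constraint ε then yields the transmission capacity λ_ε(1-ε)·b:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, beta, d = 0.05, 4.0, 3.0, 1.0  # density, path loss, SIR target, link
R, T = 30.0, 5000                          # window radius, trials (assumed)
fail = 0
for _ in range(T):
    k = rng.poisson(lam * np.pi * R**2)    # number of interferers in the window
    if k == 0:
        continue                           # no interference: success
    r = R * np.sqrt(rng.random(k))         # interferer distances to the receiver
    fail += d**(-alpha) / np.sum(r**(-alpha)) < beta
print(fail / T)                            # outage probability at density lam
```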

Journal Article
TL;DR: The capacity-achieving algorithm is a modified version of the Grossglauser-Tse two-hop relay algorithm and provides O(N) delay, and it is shown that redundancy cannot increase capacity, but can significantly improve delay.
Abstract: We consider the throughput/delay tradeoffs for scheduling data transmissions in a mobile ad hoc network. To reduce delays in the network, each user sends redundant packets along multiple paths to the destination. Assuming the network has a cell partitioned structure and users move according to a simplified independent and identically distributed (i.i.d.) mobility model, we compute the exact network capacity and the exact end-to-end queueing delay when no redundancy is used. The capacity-achieving algorithm is a modified version of the Grossglauser-Tse two-hop relay algorithm and provides O(N) delay (where N is the number of users). We then show that redundancy cannot increase capacity, but can significantly improve delay. The following necessary tradeoff is established: delay/rate ≥ O(N). Two protocols that use redundancy and operate near the boundary of this curve are developed, with delays of O(√N) and O(log N), respectively. Networks with non-i.i.d. mobility are also considered and shown through simulation to closely match the performance of i.i.d. systems in the O(√N) delay regime.

Journal Article
TL;DR: In this paper, the authors consider the problem of maximizing the sum rate of a multiple-antenna Gaussian broadcast channel (BC) with dirty-paper coding and derive simple and fast iterative algorithms that provide the optimum transmission policies for the MAC, which can easily be mapped to the optimal BC policies.
Abstract: In this correspondence, we consider the problem of maximizing sum rate of a multiple-antenna Gaussian broadcast channel (BC). It was recently found that dirty-paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e., the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty-paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel (MAC) problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the MAC, which can easily be mapped to the optimal BC policies.

Journal Article
TL;DR: The results provide an information-theoretic framework for the study of common communication problems such as precoding for intersymbol interference (ISI) channels and broadcast channels.
Abstract: We consider the generalized dirty-paper channel Y = X + S + N, E{X²} ≤ P_X, where N is not necessarily Gaussian, and the interference S is known causally or noncausally to the transmitter. We derive worst case capacity formulas and strategies for "strong" or arbitrarily varying interference. In the causal side information (SI) case, we develop a capacity formula based on minimum noise entropy strategies. We then show that strategies associated with entropy-constrained quantizers provide lower and upper bounds on the capacity. At high signal-to-noise ratio (SNR) conditions, i.e., if N is weak relative to the power constraint P_X, these bounds coincide, the optimum strategies take the form of scalar lattice quantizers, and the capacity loss due to not having S at the receiver is shown to be exactly the "shaping gain" ½·log(2πe/12) ≈ 0.254 bit. We extend the schemes to obtain achievable rates at any SNR and to noncausal SI, by incorporating minimum mean-squared error (MMSE) scaling, and by using k-dimensional lattices. For Gaussian N, the capacity loss of this scheme is upper-bounded by ½·log(2πe·G(Λ)), where G(Λ) is the normalized second moment of the lattice. With a proper choice of lattice, the loss goes to zero as the dimension k goes to infinity, in agreement with the results of Costa. These results provide an information-theoretic framework for the study of common communication problems such as precoding for intersymbol interference (ISI) channels and broadcast channels.
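
The two constants quoted here follow directly from G(ℤ) = 1/12, the normalized second moment of the integer lattice, and from the ideal limit G(Λ) → 1/(2πe); a two-line check:

```python
import numpy as np

# shaping gain of scalar lattice precoding: G(Z) = 1/12
print(0.5 * np.log2(2 * np.pi * np.e / 12))                 # ~0.2546 bit
# an ideal high-dimensional lattice, G -> 1/(2*pi*e), drives the loss to zero
print(0.5 * np.log2(2 * np.pi * np.e * (1 / (2 * np.pi * np.e))))   # 0.0
```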

Journal Article
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.

Journal Article
TL;DR: This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem, and addresses the most basic design problem: constructing tight frames with prescribed vector norms.
Abstract: Tight frames, also known as general Welch-bound-equality sequences, generalize orthonormal systems. Numerous applications, including communications, coding, and sparse approximation, require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem. To apply this method, one needs only to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even if algebraic constructions are unavailable. To demonstrate that alternating projection is an effective tool for frame design, the paper studies some important structural properties in detail. First, it addresses the most basic design problem: constructing tight frames with prescribed vector norms. Then, it discusses equiangular tight frames, which are natural dictionaries for sparse approximation. Finally, it examines tight frames whose individual vectors have low peak-to-average-power ratio (PAR), which is a valuable property for code-division multiple-access (CDMA) applications. Numerical experiments show that the proposed algorithm succeeds in each of these three cases. The appendices investigate the convergence properties of the algorithm.
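
A minimal instance of the method for the most basic problem named here, unit-norm tight frames: alternate between normalizing the columns and projecting onto the set of tight frames via the polar factor of the SVD. Dimensions are assumed; convergence to machine precision is typical for this pair of constraints but not guaranteed in general:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 7                                   # assumed frame dimensions
X = rng.standard_normal((K, N))
for _ in range(500):
    X /= np.linalg.norm(X, axis=0)            # project: unit-norm columns
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    X = np.sqrt(N / K) * U @ Vt               # project: nearest tight frame
# residuals of the two constraints: tightness and unit norms
print(np.linalg.norm(X @ X.T - (N / K) * np.eye(K)))
print(np.abs(np.linalg.norm(X, axis=0) - 1).max())
```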

Journal Article
TL;DR: This paper applies random matrix theory to obtain analytical characterizations of the capacity of correlated multiantenna channels, uncovering compact capacity expansions that are valid for arbitrary numbers of antennas and that shed insight on how antenna correlation impacts the tradeoffs among power, bandwidth, and rate.
Abstract: This paper applies random matrix theory to obtain analytical characterizations of the capacity of correlated multiantenna channels. The analysis is not restricted to the popular separable correlation model, but rather it embraces a more general representation that subsumes most of the channel models that have been treated in the literature. For arbitrary signal-to-noise ratios (SNR), the characterization is conducted in the regime of large numbers of antennas. For the low- and high-SNR regions, in turn, we uncover compact capacity expansions that are valid for arbitrary numbers of antennas and that shed insight on how antenna correlation impacts the tradeoffs among power, bandwidth, and rate.
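
A Monte Carlo sketch for the separable (Kronecker) special case that the paper's more general model subsumes, with an assumed exponential correlation profile at each end of the link:

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
snr = 10.0

def expcorr(n, rho):                        # assumed exponential correlation
    i = np.arange(n)
    return rho ** np.abs(i[:, None] - i[None, :])

Lt = np.linalg.cholesky(expcorr(nt, 0.7))   # one valid square root of R_tx
Lr = np.linalg.cholesky(expcorr(nr, 0.5))   # and of R_rx
caps = []
for _ in range(2000):
    Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    H = Lr @ Hw @ Lt.conj().T               # separable (Kronecker) correlation
    caps.append(np.log2(np.linalg.det(np.eye(nr) + snr / nt * H @ H.conj().T).real))
print(np.mean(caps))                        # ergodic capacity estimate, bits/s/Hz
```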

Journal Article
TL;DR: In this paper, the authors define an ensemble of lattices, and show that for asymptotically high dimension most of its members are simultaneously good as sphere packings, sphere coverings, additive white Gaussian noise (AWGN) channel codes and mean-squared error quantization codes.
Abstract: We define an ensemble of lattices, and show that for asymptotically high dimension most of its members are simultaneously good as sphere packings, sphere coverings, additive white Gaussian noise (AWGN) channel codes and mean-squared error (MSE) quantization codes. These lattices are generated by applying Construction A to a random linear code over a prime field of growing size, i.e., by "lifting" the code to ℝⁿ.
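
A toy instance of Construction A over F_5 (generator assumed for illustration), checking the closure property that makes the lifted set C + pℤⁿ a lattice:

```python
import numpy as np

# Construction A: lift a linear code C over F_p to the lattice L = C + p*Z^n
p = 5
g = np.array([1, 2, 3])                      # assumed generator of a length-3 code
code = {tuple((u * g) % p) for u in range(p)}

# lattice points are codewords plus p times any integer vector; linearity of C
# makes the lifted set closed under addition:
x = np.array([1, 2, 3]) + p * np.array([0, -1, 2])
y = np.array([2, 4, 1]) + p * np.array([3, 0, 1])
assert tuple((x + y) % p) in code            # the sum is again a lattice point
```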

Journal Article
TL;DR: This paper shows that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P=NP, and bounds approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR.
Abstract: This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string σ? This is a natural question about a fundamental object connected to many fields such as data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem's inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and implicitly, the worst case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results concern the hardness of approximating the smallest grammar problem. Most notably, we show that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P=NP. We then bound approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n^(1/2)). We finish by presenting two novel algorithms with exponentially better ratios of O(log³ n) and O(log(n/m*)), where m* is the size of the smallest grammar for that input. The latter algorithm highlights a connection between grammar-based compression and LZ77.
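
For one of the algorithms analyzed, LZ78, the size of the derived grammar is governed by the number of phrases in the incremental parse; a minimal phrase counter, offered only as an illustration of the object being bounded, not of the paper's bounds themselves:

```python
def lz78_phrases(s):
    """Count phrases in the LZ78 incremental parse: each phrase is the longest
    previously seen phrase extended by one character."""
    seen, cur, phrases = set(), "", 0
    for ch in s:
        cur += ch
        if cur not in seen:
            seen.add(cur)
            phrases += 1
            cur = ""
    if cur:                      # a trailing, already-seen fragment
        phrases += 1
    return phrases

print(lz78_phrases("abababababab"))   # 6 phrases: a, b, ab, aba, ba, bab
```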

Journal Article
TL;DR: This work constructs analytically optimal codebooks meeting the Welch lower bound, and develops an efficient numerical search method based on a generalized Lloyd algorithm that leads to considerable improvement on the achieved I_max over existing alternatives.
Abstract: Consider a codebook containing N unit-norm complex vectors in a K-dimensional space. In a number of applications, the codebook that minimizes the maximal cross-correlation amplitude (I_max) is often desirable. Relying on tools from combinatorial number theory, we construct analytically optimal codebooks meeting, in certain cases, the Welch lower bound. When analytical constructions are not available, we develop an efficient numerical search method based on a generalized Lloyd algorithm, which leads to considerable improvement on the achieved I_max over existing alternatives. We also derive a composite lower bound on the minimum achievable I_max that is effective for any codebook size N.
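
The Welch lower bound for N ≥ K unit-norm vectors in K dimensions is I_max ≥ √((N-K)/((N-1)K)); a sketch comparing it against the coherence of an assumed random codebook, the kind of starting point a Lloyd-type search would then improve:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 16                               # assumed codebook dimensions
welch = np.sqrt((N - K) / ((N - 1) * K))   # Welch lower bound on I_max

X = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
X /= np.linalg.norm(X, axis=0)             # unit-norm codewords
G = np.abs(X.conj().T @ X)                 # cross-correlation amplitudes
np.fill_diagonal(G, 0)
print(welch, G.max())                      # bound vs. random-codebook coherence
```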

Journal Article
TL;DR: This paper studies randomly spread code-division multiple access (CDMA) and multiuser detection in the large-system limit using the replica method developed in statistical physics and based on a general linear vector channel model.
Abstract: This paper studies randomly spread code-division multiple access (CDMA) and multiuser detection in the large-system limit using the replica method developed in statistical physics. Arbitrary input distributions and flat fading are considered. A generic multiuser detector in the form of the posterior mean estimator is applied before single-user decoding. The generic detector can be particularized to the matched filter, decorrelator, linear minimum mean-square error (MMSE) detector, the jointly or the individually optimal detector, and others. It is found that the detection output for each user, although in general asymptotically non-Gaussian conditioned on the transmitted symbol, converges as the number of users goes to infinity to a deterministic function of a "hidden" Gaussian statistic independent of the interferers. Thus, the multiuser channel can be decoupled: Each user experiences an equivalent single-user Gaussian channel, whose signal-to-noise ratio (SNR) suffers a degradation due to the multiple-access interference (MAI). The uncoded error performance (e.g., symbol error rate) and the mutual information can then be fully characterized using the degradation factor, also known as the multiuser efficiency, which can be obtained by solving a pair of coupled fixed-point equations identified in this paper. Based on a general linear vector channel model, the results are also applicable to multiple-input multiple-output (MIMO) channels such as in multiantenna systems.
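
A special case of such fixed-point equations: for the linear MMSE detector with equal-power users, the large-system SIR satisfies the classical Tse-Hanly equation SIR = P/(σ² + βP/(1+SIR)), which the replica analysis here generalizes. A sketch via fixed-point iteration, with parameters assumed:

```python
def mmse_sir(P=4.0, sigma2=1.0, beta=0.75, iters=200):
    """Large-system SIR of the linear MMSE detector, equal-power users:
    the classical Tse-Hanly fixed point (P: power, beta: load K/N)."""
    sir = 1.0
    for _ in range(iters):
        sir = P / (sigma2 + beta * P / (1.0 + sir))
    return sir

sir = mmse_sir()
print(sir, sir / (4.0 / 1.0))   # SIR and the multiuser efficiency SIR/(P/sigma2)
```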

Journal Article
TL;DR: It is shown that the network coding capacity of this counterexample network is strictly greater than the maximum linear coding capacity over any finite field, so the network is not even asymptotically linearly solvable.
Abstract: It is known that every solvable multicast network has a scalar linear solution over a sufficiently large finite-field alphabet. It is also known that this result does not generalize to arbitrary networks. There are several examples in the literature of solvable networks with no scalar linear solution over any finite field. However, each example has a linear solution for some vector dimension greater than one. It has been conjectured that every solvable network has a linear solution over some finite-field alphabet and some vector dimension. We provide a counterexample to this conjecture. We also show that if a network has no linear solution over any finite field, then it has no linear solution over any finite commutative ring with identity. Our counterexample network has no linear solution even in the more general algebraic context of modules, which includes as special cases all finite rings and Abelian groups. Furthermore, we show that the network coding capacity of this network is strictly greater than the maximum linear coding capacity over any finite field (exactly 10% greater), so the network is not even asymptotically linearly solvable. It follows that, even for more general versions of linearity such as convolutional coding, filter-bank coding, or linear time sharing, the network has no linear solution.

Journal Article
TL;DR: This paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance in a network joint source-channel coding problem.
Abstract: The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance; it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.

Journal Article
TL;DR: The commonly used statistical multiple-input multiple-output (MIMO) model is inadequate, and antenna theory is applied to take into account the area and geometry constraints and to define the spatial signal space so as to interpret experimental channel measurements in an array-independent but manageable description of the physical environment.
Abstract: Multiple-antenna systems that are limited by the area and geometry of antenna arrays are considered. Given these physical constraints, the limit on the available number of spatial degrees of freedom is derived. The commonly used statistical multiple-input multiple-output (MIMO) model is inadequate. Antenna theory is applied to take into account the area and geometry constraints, and to define the spatial signal space so as to interpret experimental channel measurements in an array-independent but manageable description of the physical environment. Based on these modeling strategies, for a spherical array of effective aperture A in a physical environment of angular spread |Ω| in solid angle, the number of spatial degrees of freedom is shown to be A|Ω| for uni-polarized antennas and 2A|Ω| for tri-polarized antennas. Together with the 2WT degrees of freedom for a system of bandwidth W transmitting in an interval T, the total degrees of freedom of a multiple-antenna channel is therefore 4WTA|Ω|.