
Showing papers in "IEEE Transactions on Information Theory in 2006"


Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.

14,587 citations
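To make the recovery procedure concrete, here is a minimal numerical sketch of ℓ1 minimization from partial Fourier samples. It assumes the numpy and cvxpy packages; the signal length, sparsity, and number of sampled frequencies are illustrative and not taken from the paper.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, num_spikes, num_freqs = 128, 5, 40

# Sparse spike train with unknown locations and amplitudes.
f = np.zeros(N)
f[rng.choice(N, num_spikes, replace=False)] = rng.standard_normal(num_spikes)

# Observe the Fourier coefficients on a random set of frequencies Omega.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
omega = rng.choice(N, num_freqs, replace=False)
y = F[omega] @ f

# l1 minimization subject to agreement with the observed coefficients.
g = cp.Variable(N, complex=True)
problem = cp.Problem(cp.Minimize(cp.norm(g, 1)), [F[omega] @ g == y])
problem.solve()

print(np.max(np.abs(np.real(g.value) - f)))   # small if recovery succeeded

With sizes in this regime, the solver typically returns the spike train up to numerical tolerance, in line with the abstract's claim that roughly O(|T| · log N) random frequencies suffice.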


Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊆ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then, for each f obeying the decay estimate above for some 0 < p < 1, f can be reconstructed to within very high accuracy from these K random measurements by solving a simple linear program.

6,342 citations
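For reference, the "simple linear program" in this setting is the minimum-ℓ1 reconstruction; a generic statement (notation ours, and the error bound below is only indicative of the form of the result, not quoted from the paper) is:

\hat f = \arg\min_{g \in \mathbb{R}^N} \|g\|_{\ell_1} \quad \text{subject to} \quad \langle g, X_k\rangle = y_k, \; k = 1,\dots,K,

\|f - \hat f\|_{\ell_2} \;\le\; C_p \, R \left(\frac{K}{\log N}\right)^{-r}, \qquad r = \tfrac{1}{p} - \tfrac{1}{2}.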


Journal ArticleDOI
TL;DR: This work presents a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks, and shows that this approach can take advantage of redundant network capacity for improved success probability and robustness.
Abstract: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks

2,806 citations
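The following toy sketch illustrates the random-linear-coding idea over GF(2); the paper analyzes general finite fields, and the field, packet sizes, redundancy, and function names here are ours. Each coded packet carries a random coefficient vector together with the corresponding XOR combination of source packets, and a receiver decodes by Gaussian elimination once it has collected enough innovative combinations.

import numpy as np

def random_linear_encode(packets, rng):
    """Form one coded packet as a random GF(2) combination of the source packets."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=k)              # random coefficients over GF(2)
    payload = np.zeros_like(packets[0])
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p                             # GF(2) addition is XOR
    return coeffs, payload

def gf2_decode(coeff_rows, payloads):
    """Recover the k source packets by Gaussian elimination over GF(2)."""
    A = np.array(coeff_rows, dtype=np.uint8) % 2
    P = np.array(payloads, dtype=np.uint8)
    k = A.shape[1]
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            raise ValueError("not enough innovative packets")
        A[[row, pivot]] = A[[pivot, row]]
        P[[row, pivot]] = P[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                P[r] ^= P[row]
        row += 1
    return P[:k]

rng = np.random.default_rng(1)
source = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
coded = [random_linear_encode(source, rng) for _ in range(20)]   # extra combinations add robustness
coeffs, payloads = zip(*coded)
recovered = gf2_decode(list(coeffs), list(payloads))
print(all((r == s).all() for r, s in zip(recovered, source)))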


Journal ArticleDOI
TL;DR: This work analyzes the averaging problem under the gossip constraint for an arbitrary network graph, and finds that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm.
Abstract: Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.

2,634 citations
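A minimal simulation of the pairwise averaging the abstract describes, assuming numpy; the ring topology, node count, and round count are illustrative. At each step a random node wakes up and averages its value with a randomly chosen neighbor, so every node's value drifts toward the global average.

import numpy as np

def pairwise_gossip(values, neighbors, num_rounds, rng):
    """Asynchronous pairwise-averaging gossip: a random node averages with a random neighbor."""
    x = np.array(values, dtype=float)
    n = len(x)
    for _ in range(num_rounds):
        i = int(rng.integers(n))
        j = int(rng.choice(neighbors[i]))
        avg = 0.5 * (x[i] + x[j])
        x[i] = x[j] = avg
    return x

rng = np.random.default_rng(0)
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring topology
x0 = rng.standard_normal(n)
xT = pairwise_gossip(x0, ring, num_rounds=5000, rng=rng)
print(x0.mean(), float(xT.max() - xT.min()))    # spread shrinks toward zero around the initial mean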


Journal ArticleDOI
TL;DR: A new notion of an enhanced broadcast channel is introduced and used jointly with the entropy power inequality to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case.
Abstract: The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints

1,899 citations


Journal ArticleDOI
TL;DR: A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain.
Abstract: Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR

1,717 citations
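To put the "linear in SNR (in decibels)" statement in formula form: with M transmit antennas, the per-mobile feedback load must grow roughly like M − 1 bits per 3 dB of SNR. The constants below indicate the scaling only and are not quoted from the paper:

B \;\approx\; (M-1)\,\log_2 \mathrm{SNR} \;\approx\; \frac{M-1}{3}\,\mathrm{SNR}_{\mathrm{dB}} \quad \text{bits per mobile.}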


Journal ArticleDOI
TL;DR: This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program; the optimization can be completed in polynomial time with standard scientific software.
Abstract: This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis

1,536 citations
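The convex programs usually meant by "convex relaxation" in this sparse-approximation setting are the ℓ1-constrained and ℓ1-penalized formulations below (notation ours): with dictionary Φ, noisy observation y, noise level ε, and regularization weight λ,

\min_{x} \|x\|_1 \;\; \text{subject to} \;\; \|\Phi x - y\|_2 \le \varepsilon,
\qquad \text{or} \qquad
\min_{x} \tfrac{1}{2} \|\Phi x - y\|_2^2 + \lambda \|x\|_1 .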


Journal ArticleDOI
TL;DR: An achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel is developed; in the additive Gaussian noise case this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel.
Abstract: Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1, while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.

1,157 citations


Journal ArticleDOI
TL;DR: Lower and upper bounds on mutual information under channel estimation error are studied, together with tight lower bounds on ergodic and outage capacities and the optimal transmitter power allocation strategies that achieve these bounds under perfect feedback.
Abstract: In this correspondence, we investigate the effect of channel estimation error on the capacity of multiple-input-multiple-output (MIMO) fading channels. We study lower and upper bounds of mutual information under channel estimation error, and show that the two bounds are tight for Gaussian inputs. Assuming Gaussian inputs we also derive tight lower bounds of ergodic and outage capacities and optimal transmitter power allocation strategies that achieve the bounds under perfect feedback. For the ergodic capacity, the optimal strategy is a modified waterfilling over the spatial (antenna) and temporal (fading) domains. This strategy is close to optimum under small feedback delays, but when the delay is large, equal powers should be allocated across spatial dimensions. For the outage capacity, the optimal scheme is a spatial waterfilling and temporal truncated channel inversion. Numerical results show that some capacity gain is obtained by spatial power allocation. Temporal power adaptation, on the other hand, gives negligible gain in terms of ergodic capacity, but greatly enhances outage performance.

812 citations


Journal ArticleDOI
TL;DR: An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed and it can be implemented in a decentralized way provided some local geographic information is available to the mobiles.
Abstract: An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.

800 citations


Journal ArticleDOI
TL;DR: A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-digital (A/D) conversion, and remote wireless sensing are discussed.
Abstract: Recent results show that a relatively small number of random projections of a signal can contain most of its salient information. It follows that if a signal is compressible in some orthonormal basis, then a very accurate reconstruction can be obtained from random projections. This "compressive sampling" approach is extended here to show that signals can be accurately recovered from random projections contaminated with noise. A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-digital (A/D) conversion, and remote wireless sensing are discussed
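The paper proposes its own iterative reconstruction algorithm; as a stand-in, here is a generic iterative hard-thresholding sketch in the same spirit. This is not the paper's algorithm; it assumes numpy, and the problem sizes and step-size choice are illustrative.

import numpy as np

def iterative_hard_thresholding(Phi, y, sparsity, num_iters=200):
    """Gradient step on ||y - Phi x||^2, then keep only the s largest-magnitude entries."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        x = x + step * (Phi.T @ (y - Phi @ x))
        keep = np.argsort(np.abs(x))[-sparsity:]
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x

rng = np.random.default_rng(0)
n, m, s = 256, 80, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random projections
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true + 0.01 * rng.standard_normal(m)      # noisy measurements
x_hat = iterative_hard_thresholding(Phi, y, s)
print(np.linalg.norm(x_hat - x_true))                 # small residual error indicates recovery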

Journal ArticleDOI
TL;DR: The paper deals with the f-divergences of Csiszar, generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence; basic properties of f-divergences, including relations to the decision errors, are proved in a new manner, replacing the classical Jensen inequality by a generalized Taylor expansion of convex functions.
Abstract: The paper deals with the f-divergences of Csiszar generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All basic properties of f-divergences including relations to the decision errors are proved in a new manner replacing the classical Jensen inequality by a new generalized Taylor expansion of convex functions. Some new properties are proved too, e.g., relations to the statistical sufficiency and deficiency. The generalized Taylor expansion also shows very easily that all f-divergences are average statistical informations (differences between prior and posterior Bayes errors) mutually differing only in the weights imposed on various prior distributions. The statistical information introduced by De Groot and the classical information of Shannon are shown to be extremal cases corresponding to α = 0 and α = 1 in the class of the so-called Arimoto α-informations introduced in this paper for 0 < α < ∞.

Journal ArticleDOI
TL;DR: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/ channel trellis.
Abstract: The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods

Journal ArticleDOI
TL;DR: A dynamic control strategy for minimizing energy expenditure in a time-varying wireless network with adaptive transmission rates and a similar algorithm that solves the related problem of maximizing network throughput subject to peak and average power constraints are developed.
Abstract: We develop a dynamic control strategy for minimizing energy expenditure in a time-varying wireless network with adaptive transmission rates. The algorithm operates without knowledge of traffic rates or channel statistics, and yields average power that is arbitrarily close to the minimum possible value achieved by an algorithm optimized with complete knowledge of future events. Proximity to this optimal solution is shown to be inversely proportional to network delay. We then present a similar algorithm that solves the related problem of maximizing network throughput subject to peak and average power constraints. The techniques used in this paper are novel and establish a foundation for stochastic network optimization

Journal ArticleDOI
TL;DR: This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions, and admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition.
Abstract: The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signaling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition. The relationship between mutual information of Gaussian channels and nonlinear minimum mean-square error (MMSE) proves key to solving the power allocation problem.
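For context, the classical waterfilling policy that mercury/waterfilling generalizes can be computed in a few lines. This sketch assumes numpy and is the Gaussian-input baseline, not the paper's mercury/waterfilling rule.

import numpy as np

def waterfilling(noise_levels, total_power):
    """Classical waterfilling: p_i = max(mu - n_i, 0) with sum(p_i) = total_power."""
    n = np.sort(np.asarray(noise_levels, dtype=float))
    for k in range(len(n), 0, -1):
        mu = (total_power + n[:k].sum()) / k      # candidate water level using the k best channels
        if mu > n[k - 1]:                         # all k channels then receive positive power
            break
    return np.maximum(mu - np.asarray(noise_levels, dtype=float), 0.0)

print(waterfilling([0.1, 0.5, 1.0, 2.0], total_power=2.0))

Running it on noise levels [0.1, 0.5, 1.0, 2.0] with total power 2.0 gives approximately [1.1, 0.7, 0.2, 0.0]; the noisiest channel receives no power, which is the behavior mercury/waterfilling refines for discrete input constellations.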

Journal ArticleDOI
TL;DR: The basic theory of stabilizer codes over finite fields is described and a Galois theory for these objects is introduced; a characterization of nonbinary stabilizer codes over F_q in terms of classical codes over F_{q^2} generalizes the well-known notion of additive codes over F_4 from the binary case.
Abstract: One formidable difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. Fault-tolerant quantum computation prompted the study of nonbinary quantum codes, but the theory of such codes is not as advanced as that of binary quantum codes. This paper describes the basic theory of stabilizer codes over finite fields. The relation between stabilizer codes and general quantum codes is clarified by introducing a Galois theory for these objects. A characterization of nonbinary stabilizer codes over F_q in terms of classical codes over F_{q^2} is provided that generalizes the well-known notion of additive codes over F_4 of the binary case. This paper also derives lower and upper bounds on the minimum distance of stabilizer codes, gives several code constructions, and derives numerous families of stabilizer codes, including quantum Hamming codes, quadratic residue codes, quantum Melas codes, quantum Bose-Chaudhuri-Hocquenghem (BCH) codes, and quantum character codes. The puncturing theory by Rains is generalized to additive codes that are not necessarily pure. Bounds on the maximal length of maximum distance separable stabilizer codes are given. A discussion of open problems concludes this paper.

Journal ArticleDOI
TL;DR: The notion of perfect space-time block codes (STBCs) are introduced and algebraic constructions of perfect STBCs for 2, 3, 4, and 6 antennas are presented.
Abstract: In this paper, we introduce the notion of perfect space-time block codes (STBCs). These codes have full-rate, full-diversity, nonvanishing constant minimum determinant for increasing spectral efficiency, uniform average transmitted energy per antenna and good shaping. We present algebraic constructions of perfect STBCs for 2, 3, 4, and 6 antennas

Journal ArticleDOI
TL;DR: In this paper, the authors simplify and extend the Eta pairing, originally discovered in the setting of supersingular curves by Barreto, to ordinary curves and obtain a speedup of a factor of around six over the usual Tate pairing, in the case of curves that have large security parameters.
Abstract: In this paper, we simplify and extend the Eta pairing, originally discovered in the setting of supersingular curves by Barreto et al., to ordinary curves. Furthermore, we show that by swapping the arguments of the Eta pairing, one obtains a very efficient algorithm resulting in a speed-up of a factor of around six over the usual Tate pairing, in the case of curves that have large security parameters, complex multiplication by an order of Q(√−3), and when the trace of Frobenius is chosen to be suitably small. Other, more minor savings are obtained for more general curves.

Journal ArticleDOI
TL;DR: The capacity is obtained under the assumption that erasure locations on all the links of the network are provided to the destinations, and the capacity region turns out to have a nice max-flow min-cut interpretation.
Abstract: In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks where all information transmission is packetized and where some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. The capacity under the assumption that erasure locations on all the links of the network are provided to the destinations is obtained. It turns out that the capacity region has a nice max-flow min-cut interpretation. The definition of cut-capacity in these networks incorporates the broadcast property of the wireless medium. It is further shown that linear coding at nodes in the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed

Journal ArticleDOI
TL;DR: In cases of sufficiently rich information patterns between the encoder and decoder, adequate anytime capacity is shown to be sufficient for there to exist a stabilizing controller, and this result is generalized to cases with noisy observations, delayed control actions, and no explicit feedback between the observer and the controller.
Abstract: In this paper, we review how Shannon's classical notion of capacity is not enough to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system. While classical capacity is not enough, another sense of capacity (parametrized by reliability) called "anytime capacity" is necessary for the stabilization of an unstable process. The required rate is given by the log of the unstable system gain and the required reliability comes from the sense of stability desired. A consequence of this necessity result is a sequential generalization of the Schalkwijk-Kailath scheme for communication over the additive white Gaussian noise (AWGN) channel with feedback. In cases of sufficiently rich information patterns between the encoder and decoder, adequate anytime capacity is also shown to be sufficient for there to exist a stabilizing controller. These sufficiency results are then generalized to cases with noisy observations, delayed control actions, and without any explicit feedback between the observer and the controller. Both necessary and sufficient conditions are extended to continuous time systems as well. We close with comments discussing a hierarchy of difficulty for communication problems and how these results establish where stabilization problems sit in that hierarchy
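The "log of the unstable system gain" condition, written out for a scalar plant (notation ours): for x_{t+1} = a x_t + u_t + w_t with |a| > 1, stabilization over the channel requires a rate

R \;>\; \log_2 |a| \quad \text{bits per channel use,}

with the required anytime reliability determined by the sense of stability desired (e.g., which moment of the state must remain bounded).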

Journal ArticleDOI
TL;DR: This work reduces the problem of establishing minimum-cost multicast connections over coded packet networks to a polynomial-time solvable optimization problem, and presents decentralized algorithms for solving it.
Abstract: We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks as well as both static multicast (where membership of the multicast group remains constant for the duration of the connection) and dynamic multicast (where membership of the multicast group changes in time, with nodes joining and leaving the group). For static multicast, we reduce the problem to a polynomial-time solvable optimization problem, and we present decentralized algorithms for solving it. These algorithms, when coupled with existing decentralized schemes for constructing network codes, yield a fully decentralized approach for achieving minimum-cost multicast. By contrast, establishing minimum-cost static multicast connections over routed packet networks is a very difficult problem even using centralized computation, except in the special cases of unicast and broadcast connections. For dynamic multicast, we reduce the problem to a dynamic programming problem and apply the theory of dynamic programming to suggest how it may be solved.
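A schematic of the flow-style optimization that the static-multicast problem reduces to (notation ours, indicative of the structure only): with link costs a_ij and a multicast of rate R from source s to sink set T,

\min \sum_{(i,j)} a_{ij} z_{ij}
\quad \text{s.t.} \quad
z_{ij} \ge x^{(t)}_{ij} \;\; \forall t \in T,
\qquad x^{(t)} \text{ is an } s\text{-}t \text{ flow of rate } R,
\qquad x^{(t)}_{ij} \ge 0,

where z_ij is the coded rate placed on link (i, j); network coding is what makes it sufficient for z_ij to cover each sink's flow separately rather than their sum.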

Journal ArticleDOI
TL;DR: Expressions for the outage probability of coded cooperation confirm that full diversity is achieved, and show that despite superficial similarities, coded cooperation is distinct from decode-and-forward, which has been shown to have diversity one.
Abstract: Cooperative communication is an emerging paradigm where multiple mobiles share their resources (bandwidth and power) to achieve better overall performance. Coded cooperation is a mechanism where cooperation is combined with-and operates through-channel coding, as opposed to the repetition-based methods. This work develops expressions for outage probability of coded cooperation. In this work, each node acts as both a data source as well as a relay, i.e., only active (transmitting) nodes are available to assist other nodes, and each node operates under overall (source + relay) power and bandwidth constraints. Outage expressions confirm that full diversity is achieved by coded cooperation. This shows that despite superficial similarities, coded cooperation is distinct from decode-and-forward, which has been shown to have diversity one. The outage probability expressions developed in this work characterize coded performance at various rates. Furthermore, outage probabilities yield bounds that are arguably more insightful than the bit-error rate (BER) results previously available for coded cooperation. Numerical comparisons shed light on the relative merits of coded cooperation and various repetition-based methods, under various inter-user and uplink channel conditions.

Journal ArticleDOI
TL;DR: This paper generalizes the stability condition to the class of Raptor codes and shows that if a sequence of output degree distributions is to achieve the capacity of the underlying channel, then the fraction of nodes of degree 2 in these degree distributions has to converge to a certain quantity depending on the channel.
Abstract: In this paper, we will investigate the performance of Raptor codes on arbitrary binary input memoryless symmetric channels (BIMSCs). In doing so, we generalize some of the results that were proved before for the erasure channel. We will generalize the stability condition to the class of Raptor codes. This generalization gives a lower bound on the fraction of output nodes of degree 2 of a Raptor code if the error probability of the belief-propagation decoder converges to zero. Using information-theoretic arguments, we will show that if a sequence of output degree distributions is to achieve the capacity of the underlying channel, then the fraction of nodes of degree 2 in these degree distributions has to converge to a certain quantity depending on the channel. For the class of erasure channels this quantity is independent of the erasure probability of the channel, but for many other classes of BIMSCs, this fraction depends on the particular channel chosen. This result has implications on the "universality" of Raptor codes for classes other than the class of erasure channels, in a sense that will be made more precise in the paper. We will also investigate the performance of specific Raptor codes which are optimized using a more exact version of the Gaussian approximation technique.

Journal ArticleDOI
TL;DR: Upper and lower bounds for the information-theoretic capacity of four-node ad hoc networks with two transmitters and two receivers using cooperative diversity are derived.
Abstract: In a cooperative diversity network, users cooperate to transmit each others' messages; to some extent nodes therefore collectively act as an antenna array and create a virtual or distributed multiple-input multiple-output (MIMO) system. In this paper, upper and lower bounds for the information-theoretic capacity of four-node ad hoc networks with two transmitters and two receivers using cooperative diversity are derived. One of the gains in a true MIMO system is a multiplexing gain in the high signal-to-noise ratio (SNR) regime, an extra factor in front of the log in the capacity expression. It is shown that cooperative diversity gives no such multiplexing gain, but it does give a high SNR additive gain, which is characterized in the paper

Journal ArticleDOI
TL;DR: This paper considers a general linear vector Gaussian channel with arbitrary signaling and pursues two closely related goals: i) closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the system, and ii) fundamental connections between information theory and estimation theory.
Abstract: This paper considers a general linear vector Gaussian channel with arbitrary signaling and pursues two closely related goals: i) closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the system, and ii) fundamental connections between information theory and estimation theory. Generalizing the fundamental relationship recently unveiled by Guo, Shamai, and Verdú, we show that the gradient of the mutual information with respect to the channel matrix is equal to the product of the channel matrix and the error covariance matrix of the best estimate of the input given the output. Gradients and derivatives with respect to other parameters are then found via the differentiation chain rule.
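Written out, the relationship stated above (for y = Hx + n, with E the error covariance of the conditional-mean estimate of x from y, and up to the normalization conventions used in the paper) reads

\nabla_{\mathbf H}\, I(\mathbf x;\mathbf y) \;=\; \mathbf H\,\mathbf E,
\qquad
\mathbf E \;=\; \mathbb E\!\left[\big(\mathbf x - \mathbb E[\mathbf x\mid\mathbf y]\big)\big(\mathbf x - \mathbb E[\mathbf x\mid\mathbf y]\big)^{H}\right].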

Journal ArticleDOI
TL;DR: It is suggested that the use of mobility to increase throughput, even slightly, in real-world networks would necessitate an abrupt and very large increase in delay.
Abstract: Gupta and Kumar (2000) introduced a random model to study throughput scaling in a wireless network with static nodes, and showed that the throughput per source-destination pair is Θ(1/√(n log n)). Grossglauser and Tse (2001) showed that when nodes are mobile it is possible to have a constant throughput scaling per source-destination pair. In most applications, delay is also a key metric of network performance. It is expected that high throughput is achieved at the cost of high delay and that one can be improved at the cost of the other. The focus of this paper is on studying this tradeoff for wireless networks in a general framework. Optimal throughput-delay scaling laws for static and mobile wireless networks are established. For static networks, it is shown that the optimal throughput-delay tradeoff is given by D(n) = Θ(n T(n)), where T(n) and D(n) are the throughput and delay scaling, respectively. For mobile networks, a simple proof of the throughput scaling of Θ(1) for the Grossglauser-Tse scheme is given and the associated delay scaling is shown to be Θ(n log n). The optimal throughput-delay tradeoff for mobile networks is also established. To capture physical movement in the real world, a random-walk (RW) model for node mobility is assumed. It is shown that for throughput of O(1/√(n log n)), which can also be achieved in static networks, the throughput-delay tradeoff is the same as in static networks, i.e., D(n) = Θ(n T(n)). Surprisingly, for almost any throughput of a higher order, the delay is shown to be Θ(n log n), which is the delay for throughput of Θ(1). Our result, thus, suggests that the use of mobility to increase throughput, even slightly, in real-world networks would necessitate an abrupt and very large increase in delay.

Journal ArticleDOI
TL;DR: The Informed-Source Coding On Demand approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented; k-partial cliques in a directed graph are defined and ISCOD is cast in terms of partial-clique covers.
Abstract: The Informed-Source Coding On Demand (ISCOD) approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented. The key idea underlying ISCOD is the joint exploitation of the data blocks already cached by each client, the server's full knowledge of client-cache contents and client requests, and the fact that each client only needs to be able to derive the blocks requested by it rather than all the blocks ever transmitted or even the union of the blocks requested by the different clients. We present two-phase ISCOD algorithms: the server first creates ad-hoc error-correction sets based on its knowledge of client states; next, it uses erasure-correction codes to construct the data for transmission. Each client uses its cached data and the received supplemental data to derive its requested blocks. The result is up to a several-fold reduction in the amount of transmitted supplemental data. Also, we define k-partial cliques in a directed graph and cast ISCOD in terms of partial-clique covers.
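A two-client toy instance of the idea (an illustrative example, not the paper's algorithm): when each client already caches the block the other one requested, a single XOR broadcast serves both.

# Server blocks; client 1 caches b2 and requests b1, client 2 caches b1 and requests b2.
b1 = bytes([0x13, 0x37])
b2 = bytes([0xca, 0xfe])

coded = bytes(x ^ y for x, y in zip(b1, b2))             # one broadcast: b1 XOR b2

client1_gets = bytes(x ^ y for x, y in zip(coded, b2))   # XOR out the cached block
client2_gets = bytes(x ^ y for x, y in zip(coded, b1))
print(client1_gets == b1, client2_gets == b2)            # True True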

Journal ArticleDOI
TL;DR: All linear STBCs that allow single-symbol ML decoding (not necessarily full-diversity) over quasi-static fading channels are characterized and classified; they are termed single-symbol decodable designs (SDD).
Abstract: Space-time block codes (STBCs) from orthogonal designs (ODs) and coordinate interleaved orthogonal designs (CIODs) have been attracting wider attention due to their amenability for fast (single-symbol) maximum-likelihood (ML) decoding, and full-rate with full-rank over quasi-static fading channels. However, these codes are instances of single-symbol decodable codes, and it is natural to ask if there exist codes other than STBCs from ODs and CIODs that allow single-symbol decoding. In this paper, the above question is answered in the affirmative by characterizing all linear STBCs that allow single-symbol ML decoding (not necessarily full-diversity) over quasi-static fading channels, calling them single-symbol decodable designs (SDD). The class SDD includes ODs and CIODs as proper subclasses. Further, among the SDD, a class of those that offer full diversity, called full-rank SDD (FSDD), are characterized and classified. We then concentrate on square designs and derive the maximal rate for square FSDDs using a constructional proof. It follows that 1) except for N=2, square complex ODs are not maximal rate and 2) a rate-one square FSDD exists only for two and four transmit antennas. For nonsquare designs, generalized coordinate-interleaved orthogonal designs (a superset of CIODs) are presented and analyzed. Finally, for rapid-fading channels an equivalent matrix channel representation is developed, which allows the results of quasi-static fading channels to be applied to rapid-fading channels. Using this representation we show that for rapid-fading channels the rate of single-symbol decodable STBCs is independent of the number of transmit antennas and inversely proportional to the block length of the code. Significantly, the CIOD for two transmit antennas is the only STBC that is single-symbol decodable over both quasi-static and rapid-fading channels.
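The most familiar member of the OD subclass mentioned above is the rate-one complex orthogonal design for two transmit antennas (the Alamouti code); its orthogonal columns are what make single-symbol ML decoding possible:

\mathbf X \;=\;
\begin{pmatrix}
 s_1 & s_2 \\
 -s_2^{*} & s_1^{*}
\end{pmatrix},
\qquad
\mathbf X^{H}\mathbf X \;=\; \big(|s_1|^2 + |s_2|^2\big)\,\mathbf I_2 .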

Journal ArticleDOI
TL;DR: It is shown that a random linear coding (RLC) based protocol disseminates all messages to all nodes in time ck + O(√k · ln(k) · ln(n)), where c < 3.46 using pull-based dissemination and c < 5.96 using push-based dissemination, and simulations suggest that c < 2 might be a tighter bound.
Abstract: The problem of simultaneously disseminating k messages in a large network of n nodes, in a decentralized and distributed manner, where nodes only have knowledge about their own contents, is studied. In every discrete time-step, each node selects a communication partner randomly, uniformly among all nodes and only one message can be transmitted. The goal is to disseminate rapidly, with high probability, all messages to all nodes. It is shown that a random linear coding (RLC) based protocol disseminates all messages to all nodes in time ck + O(√k · ln(k) · ln(n)), where c < 3.46 using pull-based dissemination and c < 5.96 using push-based dissemination. Simulations suggest that c < 2 might be a tighter bound. Thus, if k ≫ (ln(n))^3, the time for simultaneous dissemination by RLC is asymptotically at most ck, versus the Ω(k log2(n)) time of sequential dissemination. Furthermore, when k ≫ (ln(n))^3, the dissemination time is order optimal. When k ≪ (ln(n))^2, RLC reduces dissemination time by a factor of Ω(√k / ln(k)) over sequential dissemination. The overhead of the RLC protocol is negligible for messages of reasonable size. A store-and-forward mechanism without coding is also considered. It is shown that this approach performs no better than a sequential approach when k ∝ n. Owing to the distributed nature of the system, the proof requires analysis of an appropriate time-varying Bernoulli process.

Journal ArticleDOI
TL;DR: Two coding schemes that do not require the relay to decode any part of the message are investigated and it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme and the achievable rate of the side- information coding scheme can be improved via time sharing.
Abstract: Upper and lower bounds on the capacity and minimum energy-per-bit for general additive white Gaussian noise (AWGN) and frequency-division AWGN (FD-AWGN) relay channel models are established. First, the max-flow min-cut bound and the generalized block-Markov coding scheme are used to derive upper and lower bounds on capacity. These bounds are never tight for the general AWGN model and are tight only under certain conditions for the FD-AWGN model. Two coding schemes that do not require the relay to decode any part of the message are then investigated. First, it is shown that the "side-information coding scheme" can outperform the block-Markov coding scheme. It is also shown that the achievable rate of the side-information coding scheme can be improved via time sharing. In the second scheme, the relaying functions are restricted to be linear. The problem is reduced to a "single-letter" nonconvex optimization problem for the FD-AWGN model. The paper also establishes a relationship between the minimum energy-per-bit and capacity of the AWGN relay channel. This relationship together with the lower and upper bounds on capacity are used to establish corresponding lower and upper bounds on the minimum energy-per-bit that do not differ by more than a factor of 1.45 for the FD-AWGN relay channel model and 1.7 for the general AWGN model.
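For reference, the two classical expressions referred to above, in their standard discrete-memoryless forms (specialized to the Gaussian models in the paper): with X1 the source input, X2 the relay input, Y1 the relay's observation, and Y the destination output, the max-flow min-cut (cut-set) upper bound and the generalized block-Markov (decode-and-forward) lower bound are

C \;\le\; \max_{p(x_1,x_2)} \min\Big\{\, I(X_1,X_2;Y),\; I(X_1;Y,Y_1\mid X_2) \,\Big\},
\qquad
C \;\ge\; \max_{p(x_1,x_2)} \min\Big\{\, I(X_1;Y_1\mid X_2),\; I(X_1,X_2;Y) \,\Big\}.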