
Showing papers in "IEEE Transactions on Information Theory in 2013"


Journal ArticleDOI
TL;DR: JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing systems, for which uplink/downlink channel reciprocity cannot be exploited.
Abstract: We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink/downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced-dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that this condition is approached in the limit of a large number of antennas. For this case, we use Szegő's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users' angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) numbers of antennas and users.

1,347 citations
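As a sanity check on the Szegő argument above, the following sketch (illustrative parameters of my own choosing, not from the paper) builds the Toeplitz covariance of a uniform linear array with a narrow angular spread and verifies that a small set of DFT beams captures almost all of the channel energy:

```python
import numpy as np

# Illustrative setup (not the paper's): 64-antenna uniform linear array,
# half-wavelength spacing, users clustered in a 20-degree sector at broadside.
M = 64                                   # base station antennas
D = 0.5                                  # antenna spacing in wavelengths
spread = np.deg2rad(10)                  # one-sided angular spread

# Toeplitz channel covariance: r[k] = E[exp(j*2*pi*D*k*sin(theta))]
thetas = np.linspace(-spread, spread, 1000)
r = np.exp(2j * np.pi * D * np.arange(M)[:, None] * np.sin(thetas)).mean(axis=1)
R = np.empty((M, M), dtype=complex)
for i in range(M):
    for j in range(M):
        R[i, j] = r[i - j] if i >= j else np.conj(r[j - i])

# Unitary DFT matrix: by Szego's theory, asymptotically the eigenbasis of a
# Toeplitz covariance, so its columns are natural prebeamforming vectors.
F = np.fft.fft(np.eye(M)) / np.sqrt(M)
beam_power = np.real(np.diag(F.conj().T @ R @ F))

# Energy captured by the best quarter of the DFT beams
captured = np.sort(beam_power)[::-1][: M // 4].sum() / np.trace(R).real
```

With a 10° spread, only a small fraction of the DFT beams carry appreciable power, which is the dimensionality reduction JSDM exploits; the exact fraction depends on the assumed geometry.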


Journal ArticleDOI
TL;DR: This work shows that the uncoded optimum file assignment is NP-hard, develops a greedy strategy that is provably within a factor 2 of the optimum, and, for a special case, provides an efficient algorithm achieving a provably better approximation ratio of 1-(1-1/d)^d, where d is the maximum number of helpers a user can be connected to.
Abstract: Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small-cell heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as “helpers”). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. To alleviate this bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of 1-(1-1/d)^d, where d is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex and can be further reduced to a linear program. We present numerical results comparing the proposed schemes.

1,331 citations
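The uncoded placement problem above can be made concrete on a toy instance (all numbers hypothetical, not from the paper): helpers with unit cache capacity, a handful of users, and a greedy rule that repeatedly stores the single file at the single helper that most reduces expected downloading time.

```python
import itertools

# Toy instance (hypothetical numbers): 3 files, 2 helpers, 4 users.
popularity = {0: 0.5, 1: 0.3, 2: 0.2}                  # file request probabilities
user_helpers = {0: {0}, 1: {0, 1}, 2: {1}, 3: set()}   # user -> reachable helpers
CAP = 1                                                # cache slots per helper
T_HELPER, T_BASE = 1.0, 10.0                           # helper vs. base-station delay
HELPERS = (0, 1)

def expected_delay(cache):
    """Expected downloading time: a request is served by a reachable helper
    that caches the file, otherwise by the (slow) base station."""
    total = 0.0
    for helpers in user_helpers.values():
        for f, p in popularity.items():
            hit = any(f in cache[h] for h in helpers)
            total += p * (T_HELPER if hit else T_BASE)
    return total

def greedy_placement():
    """Greedily add the (helper, file) pair with the largest delay reduction."""
    cache = {h: set() for h in HELPERS}
    while any(len(cache[h]) < CAP for h in HELPERS):
        best = min(((h, f) for h in HELPERS if len(cache[h]) < CAP
                    for f in popularity if f not in cache[h]),
                   key=lambda hf: expected_delay(
                       {**cache, hf[0]: cache[hf[0]] | {hf[1]}}))
        cache[best[0]].add(best[1])
    return cache

def optimal_placement():
    """Brute force over all assignments -- exponential, as NP-hardness suggests."""
    best = None
    for combo in itertools.product(popularity, repeat=len(HELPERS)):
        cache = {h: {f} for h, f in zip(HELPERS, combo)}
        if best is None or expected_delay(cache) < expected_delay(best):
            best = cache
    return best
```

On this tiny instance the greedy solution happens to be optimal; the factor-2 guarantee is what survives in general.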


Journal ArticleDOI
TL;DR: This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples and proposes an atomic norm minimization approach to exactly recover the unobserved samples and identify the unknown frequencies.
Abstract: This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(s log s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.

920 citations


Journal ArticleDOI
TL;DR: A method for efficiently constructing polar codes is presented and analyzed, proving that for any fixed ε > 0 and all sufficiently large code lengths n, polar codes whose rate is within ε of channel capacity can be constructed in time and space that are both linear in n.
Abstract: A method for efficiently constructing polar codes is presented and analyzed. Although polar codes are explicitly defined, straightforward construction is intractable since the resulting polar bit-channels have an output alphabet that grows exponentially with the code length. Thus, the core problem that needs to be solved is that of faithfully approximating a bit-channel with an intractably large alphabet by another channel having a manageable alphabet size. We devise two approximation methods which “sandwich” the original bit-channel between a degraded and an upgraded version thereof. Both approximations can be efficiently computed and turn out to be extremely close in practice. We also provide theoretical analysis of our construction algorithms, proving that for any fixed ε > 0 and all sufficiently large code lengths n, polar codes whose rate is within ε of channel capacity can be constructed in time and space that are both linear in n.

755 citations
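For the special case of the binary erasure channel the bit-channels remain erasure channels, so construction is exact and the alphabet-growth problem the paper solves never arises. The following sketch constructs a polar code for that easy case (the paper's degrading/upgrading machinery is what is needed for general channels):

```python
def bec_bit_channels(n_levels, eps):
    """Exact erasure probabilities of the 2**n_levels polar bit-channels of a
    BEC(eps): each polarization step splits z into a worse (2z - z^2) and a
    better (z^2) erasure probability."""
    z = [eps]
    for _ in range(n_levels):
        z = [w for x in z for w in (2 * x - x * x, x * x)]
    return z

def construct_polar_code(n_levels, eps, k):
    """Information set: indices of the k most reliable bit-channels."""
    z = bec_bit_channels(n_levels, eps)
    return sorted(range(len(z)), key=lambda i: z[i])[:k]
```

Already at block length 8 over BEC(0.5) the channels polarize visibly: the all-"good" bit-channel has erasure probability 0.5^8 ≈ 0.004, while the worst exceeds 0.99.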


Journal ArticleDOI
TL;DR: This paper investigates an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement, and introduces the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
Abstract: The compressive sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices provide measurement mappings that, with overwhelming probability, achieve nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the binary ε-stable embedding property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.

645 citations
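A minimal sketch of the binary iterative hard thresholding idea described above (toy sizes and step size of my own choosing; the paper's algorithm includes refinements not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 50, 200, 3                     # dimension, 1-bit measurements, sparsity
x = np.zeros(n)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)                   # 1-bit data can only fix the direction
A = rng.standard_normal((m, n))          # i.i.d. Gaussian measurement map
y = np.sign(A @ x)                       # keep only the signs

def biht(A, y, s, iters=100):
    """Binary iterative hard thresholding (sketch): a gradient step toward
    sign consistency, then projection onto the set of s-sparse vectors."""
    m, n = A.shape
    xk = np.zeros(n)
    for _ in range(iters):
        xk = xk + (1.0 / m) * A.T @ (y - np.sign(A @ xk))
        small = np.argsort(np.abs(xk))[: n - s]
        xk[small] = 0.0                  # hard threshold: keep the s largest entries
    return xk / np.linalg.norm(xk)

x_hat = biht(A, y, s)
```

Since the measurements discard all amplitude information, the output is normalized: only the direction of x is recoverable.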


Journal ArticleDOI
TL;DR: A new family of r-erasure correcting MDS array codes is constructed that has an optimal rebuilding ratio of 1/r in the case of a single erasure.
Abstract: Maximum distance separable (MDS) array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct, then the rebuilding ratio is 1 (access all the remaining information). However, the interesting and more practical case is when the number of erasures is smaller than the erasure correcting capability of the code. For example, consider an MDS code that can correct two erasures: What is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a two-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure correcting MDS array codes that has an optimal rebuilding ratio of 1/r in the case of a single erasure. Our array codes have efficient encoding and decoding algorithms (for the cases r=2 and r=3, they use finite fields of sizes 3 and 4, respectively) and an optimal update property.

399 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n that is approximately s-sparse, even when each measurement bit is flipped with probability nearly 1/2.
Abstract: This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.

394 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the high-power spectral efficiency is upper-bounded by a quantity that does not depend on the transmit powers, and that cooperation is possible only within clusters of limited size, which are subject to out-of-cluster interference whose power scales with that of the in-cluster signals.
Abstract: Cooperation is viewed as a key ingredient for interference management in wireless networks. This paper shows that cooperation has fundamental limitations. First, it is established that in systems that rely on pilot-assisted channel estimation, the spectral efficiency is upper-bounded by a quantity that does not depend on the transmit powers; in this framework, cooperation is possible only within clusters of limited size, which are subject to out-of-cluster interference whose power scales with that of the in-cluster signals. Second, an upper bound is also shown to exist if the cooperation extends to an entire (large) system operating as a single cluster; here, pilot-assisted transmission is necessarily transcended. Altogether, it is concluded that cooperation cannot in general change an interference-limited network to a noise-limited one. Consequently, the existing literature that routinely assumes that the high-power spectral efficiency scales with the log-scale transmit power provides only a partial characterization. The complete characterization proposed in this paper subdivides the high-power regime into a degree-of-freedom regime, where the scaling with the log-scale transmit power holds approximately, and a saturation regime, where the spectral efficiency hits a ceiling that is independent of the power. Using a cellular system as an example, it is demonstrated that the spectral efficiency saturates at power levels of operational relevance.

363 citations


Journal ArticleDOI
TL;DR: The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble.
Abstract: We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble that fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in this ensemble have this property. The quantifier universal refers to the single ensemble/code that is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.

356 citations
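The threshold-saturation effect described above is easy to reproduce numerically for the BEC. The sketch below (parameters chosen purely for illustration: a (3,6)-regular ensemble, chain length L = 30, coupling window w = 3) runs density evolution for the uncoupled and coupled ensembles at an erasure rate between the BP threshold (≈ 0.4294) and the MAP threshold (≈ 0.4881) of the (3,6) ensemble:

```python
def de_uncoupled(eps, l=3, r=6, iters=2000):
    """BEC density evolution for the (l, r)-regular ensemble: fraction of
    erased variable-to-check messages after many iterations."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (r - 1)) ** (l - 1)
    return x

def de_coupled(eps, l=3, r=6, L=30, w=3, iters=3000):
    """One-dimensional coupled-chain density evolution; sections outside
    0..L-1 are perfectly known (x = 0), which seeds the decoding wave."""
    x = [eps] * L
    get = lambda i: x[i] if 0 <= i < L else 0.0
    for _ in range(iters):
        x = [eps * (sum(1 - (sum(1 - get(i + j - k) for k in range(w)) / w) ** (r - 1)
                        for j in range(w)) / w) ** (l - 1)
             for i in range(L)]
    return max(x)
```

At eps = 0.46 the uncoupled ensemble is stuck at a nonzero fixed point, while the coupled chain is decoded by a wave propagating inward from the known boundary sections.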


Journal ArticleDOI
TL;DR: It is shown that the compress-and-forward (CF) protocol achieves exchange rates within a constant bit offset of the optimal exchange rate, independent of the power constraints of the terminals in the network.
Abstract: The multiuser communication channel, in which multiple users exchange information with the help of a relay terminal, termed the multiway relay channel (mRC), is introduced. In this model, multiple interfering clusters of users communicate simultaneously, such that the users within the same cluster wish to exchange messages among themselves, i.e., each user multicasts its message to all the other users in its own cluster. It is assumed that the users cannot receive each other's signals directly. Hence, the relay terminal in this model is the enabler of communication. In particular, restricted encoders are considered, such that the encoding function of each user depends only on its own message and the received signal is used only for decoding the messages of the other users in the cluster. Achievable rate regions and an outer bound are characterized for the Gaussian mRC, and their comparison is presented in terms of the exchange rate, the symmetric rate point in the capacity region in a symmetric Gaussian mRC scenario. It is shown that the compress-and-forward (CF) protocol achieves exchange rates within a constant bit offset of the optimal exchange rate, independent of the power constraints of the terminals in the network. A finite bit gap between the exchange rates achieved by the CF and the amplify-and-forward protocols is also shown. The two special cases of the mRC, the full data exchange model, in which every user wants to receive messages of all other users, and the pairwise data exchange model, which consists of multiple two-way relay channels, are investigated in detail. In particular, for the pairwise data exchange model, in addition to the proposed random coding-based achievable schemes, a nested lattice coding-based scheme is also presented and is shown to achieve exchange rates within a constant bit gap of the exchange capacity.

341 citations


Journal ArticleDOI
TL;DR: This work describes the optimal degrees of freedom region for this more general two-user MISO broadcast correlated channel where the transmitter has imperfect knowledge of the current channel state, in addition to delayed channel state information.
Abstract: We consider the time correlated multiple-input single-output (MISO) broadcast channel where the transmitter has imperfect knowledge of the current channel state, in addition to delayed channel state information. By representing the quality of the current channel state information as P^-α for the signal-to-noise ratio P and some constant α ≥ 0, we characterize the optimal degrees of freedom region for this more general two-user MISO broadcast correlated channel. The essential ingredients of the proposed scheme lie in the quantization and multicast of the overheard interferences, while broadcasting new private messages. Our proposed scheme smoothly bridges between the scheme recently proposed by Maddah-Ali and Tse with no current state information and a simple zero-forcing beamforming with perfect current state information.

Journal ArticleDOI
TL;DR: The derivation establishes a hierarchy of information quantities that can be used to investigate information theoretic tasks in the quantum domain: the one-shot entropies most accurately describe an operational quantity, yet they tend to be difficult to calculate for large systems.
Abstract: We consider two fundamental tasks in quantum information theory, data compression with quantum side information, as well as randomness extraction against quantum side information. We characterize these tasks for general sources using so-called one-shot entropies. These characterizations, in contrast to earlier results, enable us to derive tight second-order asymptotics for these tasks in the i.i.d. limit. More generally, our derivation establishes a hierarchy of information quantities that can be used to investigate information theoretic tasks in the quantum domain: The one-shot entropies most accurately describe an operational quantity, yet they tend to be difficult to calculate for large systems. We show that they asymptotically agree (up to logarithmic terms) with entropies related to the quantum and classical information spectrum, which are easier to calculate in the i.i.d. limit. Our technique also naturally yields bounds on operational quantities for finite block lengths.
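As a purely classical, unconditional illustration of the entropy hierarchy (the paper's one-shot quantities are smoothed and conditioned on quantum side information, which this sketch does not attempt): the min- and max-entropies bracket the Shannon entropy, and all three coincide on uniform distributions, which is why the distinctions wash out in the i.i.d. limit.

```python
import math

def h_min(p):
    """Min-entropy (Renyi order infinity): -log2 of the largest probability."""
    return -math.log2(max(p))

def h_shannon(p):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def h_max(p):
    """Max-entropy (Renyi order 0): log2 of the support size."""
    return math.log2(sum(1 for q in p if q > 0))
```

For any distribution p, h_min(p) <= h_shannon(p) <= h_max(p), with equality throughout exactly when p is uniform on its support.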

Journal ArticleDOI
TL;DR: This paper provides a closed-form expression for the information capacity of an MC system based on the free diffusion of molecules, which is of primary importance to understand the performance of the MC paradigm.
Abstract: Molecular Communication (MC) is a communication paradigm based on the exchange of molecules. The implicit biocompatibility and nanoscale feasibility of MC make it a promising communication technology for nanonetworks. This paper provides a closed-form expression for the information capacity of an MC system based on the free diffusion of molecules, which is of primary importance to understand the performance of the MC paradigm. Unlike previous contributions, the provided capacity expression is independent of any coding scheme and takes into account the two main effects of the diffusion channel: the memory and the molecular noise. For this, the diffusion is decomposed into two processes, namely, the Fick's diffusion and the particle location displacement, which are analyzed as a cascade of two separate systems. The Fick's diffusion captures solely the channel memory, while the particle location displacement isolates the molecular noise. The MC capacity expression is obtained by combining the two systems as a function of the diffusion coefficient, the temperature, the transmitter-receiver distance, the bandwidth of the transmitted signal, and the average transmitted power. Numerical results show that a few kilobits per second can be reached within a distance range of a tenth of a micrometer and for an average transmitted power around 1 pW.

Journal ArticleDOI
TL;DR: The approximate message passing (AMP) algorithm is extended to solve the complex-valued LASSO problem, yielding the complex approximate message passing algorithm (CAMP), and the state evolution framework recently introduced for the analysis of AMP is generalized to the complex setting.
Abstract: Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular recovery method of l1-regularized least squares or LASSO. While several studies have shown that LASSO provides desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to solve the complex-valued LASSO problem and obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework recently introduced for the analysis of AMP to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our theoretical results are concerned with the case of i.i.d. Gaussian sensing matrices. Simulations confirm that our results hold for a larger class of random matrices.
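The per-coordinate denoiser inside CAMP is complex soft thresholding, the proximal operator of the complex l1 penalty: it shrinks the magnitude and preserves the phase. A minimal sketch of just this building block (the surrounding AMP recursion with its Onsager correction term is not reproduced here):

```python
def soft_complex(x, tau):
    """Complex soft thresholding: eta(x; tau) = (x/|x|) * max(|x| - tau, 0).
    Unlike the real case, it shrinks the magnitude and keeps the phase."""
    mag = abs(x)
    return 0j if mag <= tau else (x / mag) * (mag - tau)
```

For example, soft_complex(3+4j, 1.0) shrinks the magnitude from 5 to 4 along the same phase direction, giving 2.4+3.2j.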

Journal ArticleDOI
Paul Cuff1
TL;DR: This paper characterizes the optimal tradeoff between the amount of common randomness used and the required rate of description and generalizes and strengthens a soft covering lemma, known in the literature for its role in quantifying the resolvability of a channel.
Abstract: Two familiar notions of correlation are rediscovered as the extreme operating points for distributed synthesis of a discrete memoryless channel, in which a stochastic channel output is generated based on a compressed description of the channel input. Wyner's common information is the minimum description rate needed. However, when common randomness independent of the input is available, the necessary description rate reduces to Shannon's mutual information. This paper characterizes the optimal tradeoff between the amount of common randomness used and the required rate of description. We also include a number of related derivations, including the effect of limited local randomness, rate requirements for secrecy, applications to game theory, and new insights into common information duality. Our proof makes use of a soft covering lemma, known in the literature for its role in quantifying the resolvability of a channel. The direct proof (achievability) constructs a feasible joint distribution over all parts of the system using a soft covering, from which the behavior of the encoder and decoder is inferred, with no explicit reference to joint typicality or binning. Of auxiliary interest, this paper also generalizes and strengthens this soft covering tool.

Journal ArticleDOI
TL;DR: This paper examines the bandit problem under the weaker assumption that the distributions have moments of order 1 + ε, and derives matching lower bounds showing that the best achievable regret deteriorates when ε < 1.
Abstract: The stochastic multiarmed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper, we examine the bandit problem under the weaker assumption that the distributions have moments of order 1 + ε, for some ε ∈ (0,1]. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean, such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds showing that the best achievable regret deteriorates when ε < 1.
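The estimators the regret bounds are built on are easy to state; here is a sketch of two of them (simplified: the paper's versions use carefully tuned, sample-size-dependent truncation levels and block counts):

```python
import statistics

def truncated_mean(xs, threshold):
    """Empirical mean with samples exceeding the threshold zeroed out;
    trades a small bias for much lighter tails."""
    return sum(x for x in xs if abs(x) <= threshold) / len(xs)

def median_of_means(xs, k):
    """Split the sample into k blocks and return the median of block means;
    a single wild block can no longer ruin the estimate."""
    n = len(xs) // k
    means = [sum(xs[i * n:(i + 1) * n]) / n for i in range(k)]
    return statistics.median(means)
```

Under heavy-tailed rewards the plain empirical mean can be dragged arbitrarily far by one outlier, while both estimators above remain stable, which is what makes them usable inside upper-confidence-bound strategies.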

Journal ArticleDOI
TL;DR: This paper addresses exact repair MDS codes, which allow any single failed node to be repaired exactly with access to an arbitrary set of d survivor nodes, and characterizes the capacity of a class of multisource nonmulticast networks.
Abstract: The high repair bandwidth cost of (n,k) maximum distance separable (MDS) erasure codes has motivated a new class of codes that can reduce repair bandwidth over that of conventional MDS codes. In this paper, we address (n,k,d) exact repair MDS codes, which allow for any single failed node to be repaired exactly with access to any arbitrary set of d survivor nodes. We show the existence of exact repair MDS codes that achieve minimum repair bandwidth (matching the cut-set lower bound) for arbitrary admissible (n,k,d), i.e., k ≤ d ≤ n-1. Moreover, we extend our results to show the optimality of our codes for multiple-node failure scenarios in which an arbitrary set of r ≤ n-k failed nodes needs to be repaired. Our approach is based on asymptotic interference alignment proposed by Cadambe and Jafar. As a byproduct, we also characterize the capacity of a class of multisource nonmulticast networks.

Journal ArticleDOI
TL;DR: An approximate message passing (AMP) algorithm is used and a rigorous proof is given that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d̅(pX).
Abstract: We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d̅(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is successful with high probability from d̅(pX) n+o(n) measurements taken according to a band-diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise and does not apply uniquely to random signals, but it requires knowledge of the empirical distribution of the signal pX.

Journal ArticleDOI
TL;DR: It is demonstrated that testing whether a matrix satisfies RIP is NP-hard, which means it is impossible to efficiently test for RIP provided P ≠ NP.
Abstract: This paper is concerned with an important matrix condition in compressed sensing known as the restricted isometry property (RIP). We demonstrate that testing whether a matrix satisfies RIP is NP-hard. As a consequence of our result, it is impossible to efficiently test for RIP provided P ≠ NP.
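What NP-hardness rules out is an efficient general test; the definition itself is straightforward to check by brute force over all C(n, s) supports, as in this sketch (feasible only for tiny instances, which is exactly the point):

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Smallest delta_s such that (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2
    for all s-sparse x, computed by exhausting all C(n, s) column supports --
    the exponential work that, by the paper's result, cannot in general be
    avoided (assuming P != NP)."""
    n = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), s):
        G = A[:, list(S)].T @ A[:, list(S)]       # Gram matrix of the submatrix
        eig = np.linalg.eigvalsh(G)               # ascending eigenvalues
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta
```

A matrix with orthonormal columns has delta_s = 0, while two identical unit-norm columns force delta_2 = 1 (that support's Gram matrix has eigenvalues 0 and 2).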

Journal ArticleDOI
TL;DR: It is shown that at least for symmetric wiretap channels, random capacity-based constructions fail to achieve the strong secrecy capacity, while channel-resolvability-based constructions achieve it.
Abstract: We analyze physical-layer security based on the premise that the coding mechanism for secrecy over noisy channels is tied to the notion of channel resolvability. Instead of considering capacity-based constructions, which associate to each message a subcode that operates just below the capacity of the eavesdropper's channel, we consider channel-resolvability-based constructions, which associate to each message a subcode that operates just above the resolvability of the eavesdropper's channel. Building upon the work of Csiszar and Hayashi, we provide further evidence that channel resolvability is a powerful and versatile coding mechanism for secrecy by developing results that hold for strong secrecy metrics and arbitrary channels. Specifically, we show that at least for symmetric wiretap channels, random capacity-based constructions fail to achieve the strong secrecy capacity, while channel-resolvability-based constructions achieve it. We then leverage channel resolvability to establish the secrecy-capacity region of arbitrary broadcast channels with confidential messages and a cost constraint for strong secrecy metrics. Finally, we specialize our results to study the secrecy capacity of wireless channels with perfect channel state information (CSI), mixed channels, and compound channels with receiver CSI, as well as the secret-key capacity of source models for secret-key agreement. By tying secrecy to channel resolvability, we obtain achievable rates for strong secrecy metrics with simple proofs.

Journal ArticleDOI
TL;DR: A new polar coding scheme is proposed, which can attain the channel capacity without any alphabet extension by invoking results on polar coding for lossless compression, and it is shown that the proposed scheme achieves a better tradeoff between complexity and decoding error probability in many cases.
Abstract: This paper considers polar coding for asymmetric settings, that is, channel coding for asymmetric channels and lossy source coding for nonuniform sources and/or asymmetric distortion measures. The difficulty for asymmetric settings comes from the fact that the optimal symbol distributions of codewords are not always uniform. It is known that such nonuniform distributions can be realized by Gallager's scheme which maps multiple auxiliary symbols distributed uniformly to an actual symbol. However, the complexity of Gallager's scheme increases considerably for the case that the optimal distribution cannot be approximated by simple rational numbers. To overcome this problem for the asymmetric settings, a new polar coding scheme is proposed, which can attain the channel capacity without any alphabet extension by invoking results on polar coding for lossless compression. It is also shown that the proposed scheme achieves a better tradeoff between complexity and decoding error probability in many cases.

Journal ArticleDOI
TL;DR: In this article, the authors present a formula that characterizes the allowed undersampling of generalized sparse objects for approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser.
Abstract: Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here, A is an n×N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 by AMP is given by δ = M(ε | Denoiser), where M(ε | Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem. We prove that this formula follows from state evolution and present numerical results validating it in a wide range of settings. The above formula generates numerous new insights, both in the scalar and in the nonscalar cases.
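A minimal sketch of the AMP iteration with the classical soft-thresholding denoiser may help fix ideas (the noise-scaled threshold rule, the tuning constant alpha, and all function names are our illustrative choices, not the paper's optimally tuned minimax denoiser):

```python
import numpy as np

def soft_threshold(v, theta):
    """Scalar soft-thresholding denoiser eta(v; theta)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(y, A, alpha=2.0, iters=50):
    """Approximate message passing with a soft-thresholding denoiser.

    The Onsager correction term (mean eta' / delta) * z is what
    distinguishes AMP from plain iterative soft thresholding; the
    threshold is rescaled each iteration by the current residual level.
    """
    n, N = A.shape
    delta = n / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)  # noise-scaled threshold
        x = soft_threshold(x + A.T @ z, theta)
        # eta'(v) = 1{|v| > theta}, which is exactly 1{x != 0} here
        z = y - A @ x + (np.mean(np.abs(x) > 0) / delta) * z
    return x
```

In the notation of the abstract, recovery succeeds when the pair (ε, δ) = (k/N, n/N) lies above the phase transition curve δ = M(ε | Denoiser).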

Journal ArticleDOI
TL;DR: A spectral graph analogy to Heisenberg's celebrated uncertainty principle is developed, which provides a fundamental tradeoff between a signal's localization on a graph and in its spectral domain.
Abstract: The spectral theory of graphs provides a bridge between classical signal processing and the nascent field of graph signal processing. In this paper, a spectral graph analogy to Heisenberg's celebrated uncertainty principle is developed. Just as the classical result provides a tradeoff between signal localization in time and frequency, this result provides a fundamental tradeoff between a signal's localization on a graph and in its spectral domain. Using the eigenvectors of the graph Laplacian as a surrogate Fourier basis, quantitative definitions of graph and spectral “spreads” are given, and a complete characterization of the feasibility region of these two quantities is developed. In particular, the lower boundary of the region, referred to as the uncertainty curve, is shown to be achieved by eigenvectors associated with the smallest eigenvalues of an affine family of matrices. The convexity of the uncertainty curve allows it to be found to within ε by a fast approximation algorithm requiring O(ε^(-1/2)) typically sparse eigenvalue evaluations. Closed-form expressions for the uncertainty curves for some special classes of graphs are derived, and an accurate analytical approximation for the expected uncertainty curve of Erdős-Rényi random graphs is developed. These theoretical results are validated by numerical experiments, which also reveal an intriguing connection between diffusion processes on graphs and the uncertainty bounds.
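The two "spread" quantities can be computed directly from the graph Laplacian; below is a minimal numpy sketch under our own simplified definitions (a distance-weighted second moment around a chosen center vertex u0, and the Laplacian quadratic form as the spectral moment), which follow the spirit but not necessarily the letter of the paper:

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A from an adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def graph_spread(dist, u0, f):
    """Distance-weighted second moment of |f|^2 around vertex u0."""
    f = f / np.linalg.norm(f)
    return float(np.sum(dist[u0] ** 2 * f ** 2))

def spectral_spread(A, f):
    """Laplacian quadratic form f^T L f / ||f||^2, a graph 'frequency' moment."""
    f = f / np.linalg.norm(f)
    return float(f @ laplacian(A) @ f)
```

The two extremes behave as the uncertainty principle suggests: a delta signal at u0 has zero graph spread but large spectral spread, while a constant signal (the Laplacian's zero-eigenvalue eigenvector) has zero spectral spread but large graph spread.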

Journal ArticleDOI
TL;DR: This study uses Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities, and generalizes this construction to design high-rate maximum-distance separable codes that achieve the optimum repair communication for single systematic node failures.
Abstract: In distributed storage systems that employ erasure coding, the issue of minimizing the total communication required to exactly rebuild a storage node after a failure arises. This repair bandwidth depends on the structure of the storage code and the repair strategies used to restore the lost data. Designing high-rate maximum-distance separable (MDS) codes that achieve the optimum repair communication has been a well-known open problem. Our work resolves, in part, this open problem. In this study, we use Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities. Our construction relies on a novel method of achieving perfect interference alignment over finite fields with a finite number of symbol extensions. We generalize this construction to design m-parity MDS codes that achieve the optimum repair communication for single systematic node failures.
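The Hadamard matrices underlying the construction can be generated by the classical Sylvester recursion; a small sketch follows (the actual code construction over finite fields and its interference-alignment structure are beyond this toy example):

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k via the Sylvester recursion
    H_{2n} = [[H_n, H_n], [H_n, -H_n]].  Rows are mutually orthogonal."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H
```

The defining property H H^T = nI is what makes these matrices useful for structured constructions: any subset of rows is linearly independent.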

Journal ArticleDOI
TL;DR: In this article, the authors considered the recovery of a low-rank matrix from an observed version that simultaneously contains both erasures (most entries are not observed) and errors (values at a constant fraction of unknown locations are arbitrarily corrupted).
Abstract: This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both 1) erasures (most entries are not observed) and 2) errors (values at a constant fraction of unknown locations are arbitrarily corrupted). We provide a new unified performance guarantee on when minimizing the nuclear norm plus the l1 norm succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. By specializing this single result in different ways, we recover (up to poly-log factors) as corollaries all the existing results in exact matrix completion, and exact sparse and low-rank matrix decomposition. Our unified result also provides the first guarantees for 1) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and 2) deterministic matrix completion.
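Minimizing nuclear norm plus l1 norm is commonly done with an augmented-Lagrangian, alternating-shrinkage iteration; the sketch below is a generic such solver under assumed conventional defaults (lam = 1/sqrt(max dimension) and the mu heuristic are standard robust-PCA choices, not taken from this paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Entrywise soft thresholding: prox operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_plus_sparse(M, lam=None, mu=None, iters=200):
    """ADMM-style iteration for  min ||L||_* + lam ||S||_1  s.t.  L + S = M,
    alternating a nuclear-norm prox (L), an l1 prox (S), and a dual update (Y)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```

The dual update drives the constraint residual M - L - S to zero, so the returned pair is a feasible low-rank/sparse split on easy instances.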

Journal ArticleDOI
TL;DR: In this article, the authors show that under conditions similar to those required in the linear setting, the iterative hard thresholding algorithm can be used to accurately recover sparse or structured signals from few nonlinear observations.
Abstract: Nonconvex constraints are valuable regularizers in many optimization problems. In particular, sparsity constraints have had a significant impact on sampling theory, where they are used in compressed sensing and allow structured signals to be sampled far below the rate traditionally prescribed. Nearly all of the theory developed for compressed sensing signal recovery assumes that samples are taken using linear measurements. In this paper, we instead address the compressed sensing recovery problem in a setting where the observations are nonlinear. We show that, under conditions similar to those required in the linear setting, the iterative hard thresholding algorithm can be used to accurately recover sparse or structured signals from few nonlinear observations. Similar ideas can also be developed in a more general nonlinear optimization framework. In the second part of this paper, we therefore present a related result that shows how this can be done under sparsity and union-of-subspaces constraints, whenever a generalization of the restricted isometry property traditionally imposed on the compressed sensing system holds.
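For reference, here is the standard linear-measurement form of iterative hard thresholding that the paper extends to nonlinear observations (a minimal sketch; the step size and helper names are ours):

```python
import numpy as np

def hard_threshold(x, k):
    """Project onto k-sparse vectors: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(y, A, k, iters=100, step=1.0):
    """Iterative hard thresholding for y = Ax with x k-sparse:
    gradient step on ||y - Ax||^2, then hard-threshold projection."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x
```

In the nonlinear setting of the paper, the residual y - Ax is replaced by the gradient of a more general data-fit term, with recovery guaranteed under a generalized restricted isometry property.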

Journal ArticleDOI
TL;DR: This paper provides a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT, and the region is found to depend only on the marginal probabilities.
Abstract: The degrees of freedom (DoF) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, Ii, i = 1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (P), delayed (D), or not available (N), i.e., I1, I2 ∈ {P, D, N}, and therefore, the overall CSIT can alternate between the nine resulting states I1I2. The fraction of time associated with CSIT state I1I2 is denoted by the parameter λI1I2 and it is assumed throughout that λI1I2 = λI2I1, i.e., λPN = λNP, λPD = λDP, λDN = λND. Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. Surprisingly, the DoF region is found to depend only on the marginal probabilities (λP, λD, λN) = (ΣI2 λPI2, ΣI2 λDI2, ΣI2 λNI2), I2 ∈ {P, D, N}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, D(λI1I2 : I1, I2 ∈ {P, D, N}), is the same as the DoF region with only three CSIT states, D(λPP, λDD, λNN), under the same marginal distribution of CSIT states, i.e., (λPP, λDD, λNN) = (λP, λD, λN). The sum-DoF value can be expressed as DoF = min((4 + 2λP)/3, 1 + λP + λD), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λP, λD)min = ((3/2)DoF - 2, 1 - (1/2)DoF) when DoF ∈ [4/3, 2] and (λP, λD)min = (0, (DoF - 1)+) when DoF ∈ [0, 4/3). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with M transmit antennas and K single-antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoF of min(M, K) is characterized.
By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time that perfect CSIT is required per user in order to achieve the DoF of min(M, K) is min(M, K)/K.
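The closed-form sum-DoF tradeoff can be checked numerically; below is a small sketch of the formula and the minimum-fraction map (variable names are ours; min_fractions returns fractions sufficient to achieve at least the target DoF, since any positive fractions already yield DoF ≥ 1):

```python
def sum_dof(lam_p, lam_d):
    """Sum-DoF of the two-user MISO BC with alternating CSIT:
    DoF = min((4 + 2*lam_p)/3, 1 + lam_p + lam_d)."""
    return min((4 + 2 * lam_p) / 3, 1 + lam_p + lam_d)

def min_fractions(dof):
    """Minimum marginal fractions (lam_p, lam_d) achieving a target sum-DoF:
    ((3/2)DoF - 2, 1 - (1/2)DoF) on [4/3, 2], and (0, (DoF - 1)+) below 4/3."""
    if dof >= 4 / 3:
        return (1.5 * dof - 2, 1 - 0.5 * dof)
    return (0.0, max(dof - 1, 0.0))
```

The extremes match intuition: always-perfect CSIT (λP = 1) gives the full DoF of 2, while delayed-only CSIT (λD = 1) gives the familiar 4/3.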

Journal ArticleDOI
TL;DR: Low-complexity ATs and UTs selection schemes are presented and it is demonstrated through Monte Carlo simulation that the proposed schemes essentially eliminate the problem of rank deficiency of the system matrix and greatly mitigate the noninteger penalty affecting CoF/RCoF at high SNR.
Abstract: We study a distributed antenna system where L antenna terminals (ATs) are connected to a central processor (CP) via digital error-free links of finite capacity R0, and serve K user terminals (UTs). This model has been widely investigated both for the uplink (UTs to CP) and for the downlink (CP to UTs), which are instances of the general multiple-access relay and broadcast relay networks. We contribute to the subject in the following ways: 1) For the uplink, we consider the recently proposed “compute and forward” (CoF) approach and examine the corresponding system optimization at finite SNR. 2) For the downlink, we propose a novel precoding scheme nicknamed “reverse compute and forward” (RCoF). 3) In both cases, we present low-complexity versions of CoF and RCoF based on standard scalar quantization at the receivers, that lead to discrete-input discrete-output symmetric memoryless channel models for which near-optimal performance can be achieved by standard single-user linear coding. 4) We provide extensive numerical results and finite SNR comparison with other “state of the art” information theoretic techniques, in scenarios including fading and shadowing. The proposed uplink and downlink system optimization focuses specifically on the ATs and UTs selection problem. In both cases, for a given set of transmitters, the goal consists of selecting a subset of the receivers such that the corresponding system matrix has full rank and the sum rate is maximized. We present low-complexity ATs and UTs selection schemes and demonstrate through Monte Carlo simulation that the proposed schemes essentially eliminate the problem of rank deficiency of the system matrix and greatly mitigate the noninteger penalty affecting CoF/RCoF at high SNR. Comparisons with other state-of-the-art information-theoretic schemes show competitive performance of the proposed approaches at significantly lower complexity.
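The selection step can be illustrated as greedy rank maximization: keep adding receivers whose coefficient vectors increase the rank of the system matrix. The toy sketch below works over the reals with numpy, whereas the paper's setting is a finite field and its selection schemes also account for the sum rate:

```python
import numpy as np

def greedy_full_rank(rows, k):
    """Greedily select row indices so the stacked matrix reaches rank k.
    Returns the selected indices (fewer than k if rank k is unreachable)."""
    selected, rank = [], 0
    current = np.zeros((0, rows.shape[1]))
    for i, r in enumerate(rows):
        candidate = np.vstack([current, r])
        if np.linalg.matrix_rank(candidate) > rank:  # r is linearly independent
            selected.append(i)
            current, rank = candidate, rank + 1
        if rank == k:
            break
    return selected
```

A redundant receiver (one whose coefficient vector lies in the span of those already chosen) is simply skipped, which is the mechanism that eliminates rank deficiency of the system matrix.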

Journal ArticleDOI
TL;DR: It turns out that the local delay behaves rather differently in the two cases of high mobility and no mobility, and the low- and high-rate asymptotic behavior of the minimum achievable delay in each case is provided.
Abstract: Communication between two neighboring nodes is a very basic operation in wireless networks. Yet very little research has focused on the local delay in networks with randomly placed nodes, defined as the mean time it takes a node to connect to its nearest neighbor. We study this problem for Poisson networks, first considering interference only, then noise only, and finally and briefly, interference plus noise. In the noiseless case, we analyze four different types of nearest-neighbor communication and compare the extreme cases of high mobility, where a new Poisson process is drawn in each time slot, and no mobility, where only a single realization exists and nodes stay put forever. It turns out that the local delay behaves rather differently in the two cases. We also provide the low- and high-rate asymptotic behavior of the minimum achievable delay in each case. In the cases with noise, power control is essential to keep the delay finite, and randomized power control can drastically reduce the required (mean) power for finite local delay.

Journal ArticleDOI
TL;DR: In this article, a general framework is developed for studying nested-lattice-based PNC schemes, called lattice network coding (LNC) schemes for short, by making a direct connection between C&F and module theory.
Abstract: The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes, called lattice network coding (LNC) schemes for short, by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance/complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.
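The Construction A lattices behind such schemes have a simple description: Λ = C + qZ^n for a linear code C over Z_q, so lattice membership reduces to a codeword check after reduction mod q. A brute-force toy sketch follows (small parameters only; function names are ours, and real schemes would never enumerate codewords):

```python
import numpy as np
from itertools import product

def codewords(G, q):
    """All codewords of the linear code over Z_q generated by the rows of G."""
    k, n = G.shape
    return {tuple((np.array(m) @ G) % q) for m in product(range(q), repeat=k)}

def in_construction_a(x, G, q):
    """Membership in the Construction A lattice Lambda = C + q * Z^n:
    x is a lattice point iff x mod q is a codeword of C."""
    return tuple(np.mod(x, q)) in codewords(G, q)
```

This mod-q structure is what lets a relay decode an integer combination of lattice codewords to another lattice point, the algebraic heart of C&F.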