
Showing papers in "IEEE Transactions on Information Theory in 2012"


Journal ArticleDOI
TL;DR: Stagewise Orthogonal Matching Pursuit (StOMP) successively transforms the signal into a negligible residual; numerical examples show that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.
Abstract: Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with “typical”/“random” Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with the initial residual r_0 = y, at the s-th stage it forms the “matched filter” Φ^T r_(s-1), identifies all coordinates with amplitudes exceeding a specially chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g., 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g., 10), while OMP can take many (e.g., n). We give both theoretical and empirical support for the large-system effectiveness of StOMP. We give numerical examples showing that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.
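
As an illustration of the stages described above, here is a minimal NumPy sketch of StOMP; the threshold rule (a multiple t of the formal noise level ||r||/√n, with t around 2-3) only loosely mimics the paper's specially chosen threshold, and this is not the authors' reference implementation.

```python
import numpy as np

def stomp(Phi, y, n_stages=10, t=2.5):
    """StOMP sketch: threshold the matched filter, enlarge the support,
    re-fit by least squares, iterate a fixed number of stages."""
    n, N = Phi.shape
    support = np.zeros(N, dtype=bool)
    x = np.zeros(N)
    r = y.copy()                                  # r_0 = y
    for _ in range(n_stages):
        c = Phi.T @ r                             # matched filter
        support |= np.abs(c) > t * np.linalg.norm(r) / np.sqrt(n)
        if not support.any():
            break
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        r = y - Phi @ x                           # new residual
    return x
```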

1,416 citations


Journal ArticleDOI
TL;DR: This work analyzes an intuitive Gaussian process upper confidence bound algorithm and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
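
A minimal sketch of GP-UCB on a finite candidate set may help make the algorithm concrete; the beta rule below follows the paper's finite-set choice up to constants, and the generic `kernel` callback is an assumption of this sketch.

```python
import numpy as np

def gp_ucb(f, X, kernel, T=30, noise=1e-2, delta=0.1):
    """GP-UCB over a finite candidate set X (rows are points).

    kernel(A, B) must return the Gram matrix between row sets A and B;
    beta_t = 2 log(|X| t^2 pi^2 / (6 delta)) up to constants.
    """
    idx = [0]                       # arbitrary first query
    ys = [f(X[0])]
    for t in range(1, T):
        K = kernel(X[idx], X[idx]) + noise * np.eye(len(idx))
        k_star = kernel(X, X[idx])                    # (|X|, t)
        mu = k_star @ np.linalg.solve(K, np.array(ys))
        var = kernel(X, X).diagonal() - np.einsum(
            'ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
        beta = 2 * np.log(len(X) * t**2 * np.pi**2 / (6 * delta))
        i = int(np.argmax(mu + np.sqrt(beta * np.clip(var, 0, None))))
        idx.append(i)
        ys.append(f(X[i]))
    return X[idx[int(np.argmax(ys))]]   # best observed point
```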

851 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance for linear [n,k,d]q codes.
Abstract: Consider a linear [n,k,d]q code C. We say that the ith coordinate of C has locality r if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codeword length, worst-case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst-case distance for optimal codes. Using our structure theorem, we obtain a tight bound on the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance.
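
For reference, the tight redundancy bound the abstract refers to can be written as follows (r denotes the locality of the information coordinates); this is a transcription for orientation, so consult the paper for the precise statement and conditions.

```latex
% Distance bound for an [n,k,d]_q code whose information coordinates
% have locality r; equivalently a lower bound on the redundancy n - k:
d \;\le\; n - k - \left\lceil \tfrac{k}{r} \right\rceil + 2
\qquad\Longleftrightarrow\qquad
n - k \;\ge\; \left\lceil \tfrac{k}{r} \right\rceil + d - 2 .
```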

793 citations


Journal ArticleDOI
TL;DR: The per-user channel correlation model requires the development of a novel deterministic equivalent of the empirical Stieltjes transform of large dimensional random matrices with generalized variance profile, and deterministic SINR approximations enable us to solve various practical optimization problems.
Abstract: In this paper, we study the sum rate performance of zero-forcing (ZF) and regularized ZF (RZF) precoding in large MISO broadcast systems under the assumptions of imperfect channel state information at the transmitter and per-user channel transmit correlation. Our analysis assumes that the number of transmit antennas M and the number of single-antenna users K are large while their ratio remains bounded. We derive deterministic approximations of the empirical signal-to-interference plus noise ratio (SINR) at the receivers, which are tight as M, K → ∞. In the course of this derivation, the per-user channel correlation model requires the development of a novel deterministic equivalent of the empirical Stieltjes transform of large dimensional random matrices with generalized variance profile. The deterministic SINR approximations enable us to solve various practical optimization problems. Under sum rate maximization, we derive 1) for RZF the optimal regularization parameter; 2) for ZF the optimal number of users; 3) for ZF and RZF the optimal power allocation scheme; and 4) the optimal amount of feedback in large FDD/TDD multiuser systems. Numerical simulations suggest that the deterministic approximations are accurate even for small M, K.

648 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that in an MIMO broadcast channel with transmit antennas and receivers each with 1 receive antenna, K/1+1/2+···+ 1/K (>;1) degrees of freedom is achievable even when the fed back channel state is completely independent of the current channel state.
Abstract: Transmitter channel state information (CSIT) is crucial for the multiplexing gains offered by advanced interference management techniques such as multiuser multiple-input multiple-output (MIMO) and interference alignment. Such CSIT is usually obtained by feedback from the receivers, but the feedback is subject to delays. The usual approach is to use the fed back information to predict the current channel state and then apply a scheme designed assuming perfect CSIT. When the feedback delay is large compared to the channel coherence time, such a prediction approach completely fails to achieve any multiplexing gain. In this paper, we show that even in this case, the completely stale CSI is still very useful. More concretely, we show that in a MIMO broadcast channel with K transmit antennas and K receivers each with 1 receive antenna, K/(1 + 1/2 + ··· + 1/K) (> 1) degrees of freedom are achievable even when the fed back channel state is completely independent of the current channel state. Moreover, we establish that if all receivers have independent and identically distributed channels, then this is the optimal number of degrees of freedom achievable. In the optimal scheme, the transmitter uses the fed back CSI to learn the side information that the receivers receive from previous transmissions rather than to predict the current channel state. Our result can be viewed as the first example of feedback providing a degree-of-freedom gain in memoryless channels.
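
The achievable degrees of freedom quoted above are easy to evaluate; a small script (an illustration, not from the paper):

```python
from fractions import Fraction

def mat_dof(K):
    """Sum DoF of the K-user broadcast channel with completely stale
    CSIT: K / (1 + 1/2 + ... + 1/K), per the abstract's formula."""
    return K / sum(Fraction(1, j) for j in range(1, K + 1))

# K = 2 gives 4/3 and K = 3 gives 18/11: strictly more than the single
# degree of freedom available with no CSIT, below the K of perfect CSIT.
print([mat_dof(K) for K in (2, 3, 4)])
```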

525 citations


Journal ArticleDOI
TL;DR: This paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness, and provides convergence rate guarantees in distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication.
Abstract: The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication. It introduces separably estimable observation models that generalize the observability condition in linear centralized estimation to nonlinear distributed estimation. It studies two distributed estimation algorithms in separably estimable models, the NU (with its linear counterpart LU) and the NLU. Their update rule combines a consensus step (where each sensor updates its state by weight-averaging it with its neighbors' states) and an innovation step (where each sensor processes its local current observation). This makes the three algorithms of the consensus + innovations type, very different from traditional consensus. This paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For LU and NU, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms LU and NU are analyzed in the framework of stochastic approximation theory; algorithm NLU exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach that is developed in this paper.
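
The consensus + innovations structure of the updates is easy to sketch for the linear (LU) case; the exact placement of the weights below is an assumption made for illustration, not the paper's precise recursion.

```python
import numpy as np

def lu_step(x, A, H, y, alpha, beta):
    """One consensus + innovations update, linear observations.

    x: (N, M) array of N sensors' estimates of an M-vector;
    A: (N, N) nonnegative adjacency weights; H: list of local
    observation matrices; y: list of current local observations;
    alpha, beta: current values of the decaying weight sequences.
    """
    x_new = np.empty_like(x)
    for i in range(len(x)):
        # consensus step: pull toward neighbors' states
        consensus = sum(A[i, j] * (x[i] - x[j]) for j in range(len(x)))
        # innovation step: process the local current observation
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    return x_new
```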

447 citations


Journal ArticleDOI
TL;DR: It is shown that the use of multiple molecules leads to a reduced error rate in a manner akin to diversity order in wireless communications, and that the additive inverse Gaussian noise channel model is appropriate for molecular communication in fluid media.
Abstract: In this paper, we consider molecular communication, with information conveyed in the time of release of molecules. These molecules propagate from the transmitter to the receiver through a fluid medium, propelled by a positive drift velocity and Brownian motion. The main contribution of this paper is the development of a theoretical foundation for such a communication system; specifically, the additive inverse Gaussian noise (AIGN) channel model. In such a channel, the information is corrupted by noise that follows an IG distribution. We show that such a channel model is appropriate for molecular communication in fluid media. Taking advantage of the available literature on the IG distribution, upper and lower bounds on channel capacity are developed, and a maximum likelihood receiver is derived. Results are presented which suggest that this channel does not have a single quality measure analogous to signal-to-noise ratio in the additive white Gaussian noise channel. It is also shown that the use of multiple molecules leads to a reduced error rate in a manner akin to diversity order in wireless communications. Finally, some open problems are discussed that arise from the IG channel model.
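
The channel model is easy to simulate: the first passage time of Brownian motion with positive drift is inverse Gaussian, which NumPy exposes as the Wald distribution. A small sketch with made-up parameter values:

```python
import numpy as np

# A molecule released at time t arrives at t + T, where T is the first
# passage time over distance d with drift v and diffusion sigma2:
# T ~ IG(mean = d/v, shape = d^2/sigma2), NumPy's "wald" distribution.
d, v, sigma2 = 1.0, 0.5, 0.2
T = np.random.wald(d / v, d**2 / sigma2, size=100_000)

# Sample mean ~ d/v; sample variance ~ (d/v)^3 / (d^2/sigma2).
# Releasing m molecules per symbol and decoding from the earliest
# arrival concentrates the delay, the diversity effect noted above.
print(T.mean(), T.var())
```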

429 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient convex optimization-based algorithm called outlier pursuit is presented, which under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace and identifies the corrupted points.
Abstract: Singular-value decomposition (SVD) [and principal component analysis (PCA)] is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA, such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm that we call outlier pursuit, which under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation is of paramount interest in bioinformatics, financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization; however, our results, setup, and approach necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself. In any problem where one seeks to recover a structure rather than the exact initial matrices, techniques developed thus far relying on certificates of optimality will fail. We present an important extension of these methods, which allows the treatment of such problems.
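
The decomposition at the heart of outlier pursuit can be prototyped directly with a convex solver; the formulation below (nuclear norm plus a column-wise l1,2 penalty) matches the abstract's description, but the weighting and solver choice are illustrative assumptions.

```python
import cvxpy as cp

def outlier_pursuit(M, lam=0.5):
    """Split data matrix M into low-rank L plus column-sparse C."""
    L = cp.Variable(M.shape)
    C = cp.Variable(M.shape)
    # Nuclear norm promotes low rank; the sum of column l2 norms
    # (an l_{1,2} norm) drives entire columns of C to zero.
    obj = cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0))
    cp.Problem(cp.Minimize(obj), [L + C == M]).solve()
    return L.value, C.value  # flag columns of C with large norm as outliers
```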

388 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the normalized risk of the LASSO converges to a limit, and an explicit expression for this limit was derived for random instances, based on the analysis of AMP.
Abstract: We consider the problem of learning a coefficient vector x_0 ∈ R^N from noisy linear observations y = Ax_0 + w ∈ R^n. In many contexts (ranging from model selection to image processing), it is desirable to construct a sparse estimator x̂. In this case, a popular approach consists in solving an l1-penalized least-squares problem known as the LASSO or basis pursuit denoising. For sequences of matrices A of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of approximate message passing (AMP), a recently developed efficient algorithm inspired by graphical model ideas. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.
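
For orientation, here is a minimal sketch of an AMP iteration for the LASSO with a fixed soft threshold; the paper's analysis uses a calibrated threshold sequence matched to the l1 penalty, so treat this purely as an illustration of the recursion's shape.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_lasso(A, y, theta=1.0, iters=50):
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        u = x + A.T @ z
        x = soft(u, theta)
        # Onsager correction: (N/n) * mean of eta'(u), the active fraction
        z = y - A @ x + (N / n) * np.mean(np.abs(u) > theta) * z
    return x
```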

334 citations


Journal ArticleDOI
TL;DR: The group testing problem is formulated as a channel coding/decoding problem and a single-letter characterization for the total number of tests used to identify the defective set is derived.
Abstract: The fundamental task of group testing is to recover a small distinguished subset of items from a large population while efficiently reducing the total number of tests (measurements). The key contribution of this paper is in adopting a new information-theoretic perspective on group testing problems. We formulate the group testing problem as a channel coding/decoding problem and derive a single-letter characterization for the total number of tests used to identify the defective set. Although the focus of this paper is primarily on group testing, our main result is generally applicable to other compressive sensing models.

325 citations


Journal ArticleDOI
TL;DR: An explicit, exact-repair code is presented for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, and the code's ability to perform repair through mere transfer of data is named repair by transfer.
Abstract: Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess in addition the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n - 1. This code has a particularly simple graphical description, and most interestingly has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data as repair by transfer. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term as “helper node pooling,” and show that it is the necessity to satisfy such scenarios that overconstrains the system.

Journal ArticleDOI
TL;DR: A system in which the average recharge rate is time varying over a larger time scale is considered, and the optimal offline power policy that maximizes the average throughput is derived using majorization theory.
Abstract: In energy harvesting communication systems, an exogenous recharge process supplies the energy necessary for data transmission, and the arriving energy can be buffered in a battery before consumption. We determine the information-theoretic capacity of the classical additive white Gaussian noise (AWGN) channel with an energy harvesting transmitter with an unlimited-size battery. As the energy arrives randomly and can be saved in the battery, codewords must obey cumulative stochastic energy constraints. We show that the capacity of the AWGN channel with such stochastic channel input constraints is equal to the capacity with an average power constraint equal to the average recharge rate. We provide two capacity achieving schemes: save-and-transmit and best-effort-transmit. In the save-and-transmit scheme, the transmitter collects energy in a saving phase of proper duration that guarantees that there will be no energy shortages during the transmission of code symbols. In the best-effort-transmit scheme, the transmission starts right away without an initial saving period, and the transmitter sends a code symbol if there is sufficient energy in the battery, and a zero symbol otherwise. Finally, we consider a system in which the average recharge rate is time varying over a larger time scale and derive the optimal offline power policy that maximizes the average throughput, by using majorization theory.
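
The best-effort-transmit scheme is simple enough to simulate; the quadratic energy cost per symbol below is the usual AWGN convention, and the function name is invented for illustration.

```python
import numpy as np

def best_effort_transmit(symbols, energy_arrivals):
    """Send each code symbol only if the battery can pay for it,
    otherwise send 0; unused energy stays buffered in the battery."""
    battery, out = 0.0, []
    for s, e in zip(symbols, energy_arrivals):
        battery += e
        cost = s ** 2          # energy cost of transmitting amplitude s
        if battery >= cost:
            battery -= cost
            out.append(s)
        else:
            out.append(0.0)
    return np.array(out)

# With average recharge rate P, energy shortages become rare and the
# scheme supports power-P Gaussian codewords in the long run, matching
# the average-power-constrained capacity claimed above.
```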

Journal ArticleDOI
TL;DR: A new simplified downlink scheduling scheme is proposed that preselects users according to probabilities obtained from the large-system results, depending on the desired fairness criterion; it performs close to the optimal (finite-dimensional) opportunistic user selection while requiring significantly less channel state feedback, since only a small fraction of preselected users must feed back their channel state information.
Abstract: We consider the downlink of a multicell system with multiantenna base stations and single-antenna user terminals, arbitrary base station cooperation clusters, distance-dependent propagation pathloss, and general “fairness” requirements. Base stations in the same cooperation cluster employ joint transmission with linear zero-forcing beamforming, subject to sum or per-base station power constraints. Intercluster interference is treated as noise at the user terminals. Analytic expressions for the system spectral efficiency are found in the large-system limit where both the numbers of users and antennas per base station tend to infinity with a given ratio. In particular, for the per-base station power constraint, we find new results in random matrix theory, yielding the squared Frobenius norm of submatrices of the Moore-Penrose pseudo-inverse for the structured non-i.i.d. channel matrix resulting from the cooperation cluster, user distribution, and path-loss coefficients. The analysis is extended to the case of nonideal Channel State Information at the Transmitters obtained through explicit downlink channel training and uplink feedback. Specifically, our results illuminate the trade-off between the benefit of a larger number of cooperating antennas and the cost of estimating higher-dimensional channel vectors. Furthermore, our analysis leads to a new simplified downlink scheduling scheme that preselects the users according to probabilities obtained from the large-system results, depending on the desired fairness criterion. The proposed scheme performs close to the optimal (finite-dimensional) opportunistic user selection while requiring significantly less channel state feedback, since only a small fraction of preselected users must feed back their channel state information.

Journal ArticleDOI
TL;DR: This paper revisits the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements and demonstrates that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
Abstract: This paper revisits the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem is an extension of single channel sparse recovery, which lies at the heart of compressed sensing. Inspired by the links to array signal processing, a new family of MMV algorithms is considered that highlight the role of rank in determining the difficulty of the MMV recovery problem. The simplest such method is a discrete version of MUSIC which is guaranteed to recover the sparse vectors in the full rank MMV setting, under mild conditions. This idea is extended to a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case while also providing guaranteed recovery in the full rank setting. In contrast, popular MMV methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques are shown to be rank blind in terms of worst case analysis. Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
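
The full-rank case admits a compact sketch of the discrete MUSIC step: when the k-sparse signal matrix has rank k, the range of Y equals the range of the k active columns of A, so support recovery reduces to a subspace test. A minimal illustration (not the paper's code):

```python
import numpy as np

def mmv_music(A, Y, k):
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    U = U[:, :k]                                # signal subspace, rank-k case
    An = A / np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
    scores = np.linalg.norm(U.T @ An, axis=0)   # alignment with range(Y)
    return np.sort(np.argsort(scores)[-k:])     # k best-aligned columns
```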

Journal ArticleDOI
TL;DR: In this paper, the degree-of-freedom (DoF) regions are characterized for the MIMO broadcast channel (BC), interference channels (ICs), including X and multihop ICs, and the cognitive radio channel (CRC), when there is no channel state information at the transmitter(s) (CSIT) and for fading distributions in which transmit directions are statistically indistinguishable.
Abstract: The degree-of-freedom (DoF) regions are characterized for the multiple-input multiple-output (MIMO) broadcast channel (BC), interference channels (ICs), including X and multihop ICs, and the cognitive radio channel (CRC), when there is no channel state information at the transmitter(s) (CSIT) and for fading distributions in which transmit directions are statistically indistinguishable. For the K-user MIMO BC, the exact DoF region is obtained, which shows that time division is DoF-region optimal. For the two-user MIMO IC and CRC, inner and outer bounds are obtained that coincide for a vast majority of the relative numbers of antennas at the four terminals. Finally, the DoF of the K-user MIMO IC, the CRC, and X networks are obtained for certain classes of these networks. The results herein are derived for fading distributions and additive noises that are more general than those considered in other simultaneous related works. The DoF with and without CSIT are compared and conditions under which a lack of CSIT does, or does not, result in the loss of DoF are identified, thereby 1) providing robust no-CSIT schemes that have the same DoF as their previously found CSIT counterparts and 2) identifying situations where CSI feedback to transmitters would provide gains that are significant enough that even the DoF could be improved.

Journal ArticleDOI
TL;DR: The constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound; interference alignment is a theme that runs throughout the paper.
Abstract: Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However, regenerating codes possess in addition the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 ≥ 2k - 1 termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA and IA is also a crucial component to the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.

Journal ArticleDOI
TL;DR: In this paper, the authors study the recovery conditions of weighted l1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available, and they show that if at least 50% of the (partial) support information is accurate, then weighted l1 minimization is stable and robust under weaker sufficient conditions than the analogous conditions for standard l1 minimization.
Abstract: We study recovery conditions of weighted l1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted l1 minimization is stable and robust under weaker sufficient conditions than the analogous conditions for standard l1 minimization. Moreover, weighted l1 minimization provides better upper bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
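
A weighted l1 program of the kind studied here is straightforward to prototype; the weight value 0.3 on the estimated support and the noise tolerance eps are arbitrary illustration choices.

```python
import cvxpy as cp
import numpy as np

def weighted_l1(A, y, support_est, w=0.3, eps=1e-3):
    """min sum_i w_i |x_i| s.t. ||Ax - y||_2 <= eps, with w_i = w < 1 on
    the coordinates believed (from partial information) to be active."""
    N = A.shape[1]
    weights = np.ones(N)
    weights[support_est] = w
    x = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.sum(cp.multiply(weights, cp.abs(x)))),
               [cp.norm(A @ x - y, 2) <= eps]).solve()
    return x.value
```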

Journal ArticleDOI
TL;DR: In this paper, the complexity of stochastic convex optimization in an oracle model of computation is studied, and a new notion of discrepancy between functions is introduced that can be used to reduce problems of stochastic convex optimization to statistical parameter estimation, which is lower bounded using information-theoretic methods.
Abstract: Relative to the large literature on upper bounds on complexity of convex optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes.

Journal ArticleDOI
TL;DR: For stationary memoryless sources with separable distortion, the minimum achievable rate is shown to be closely approximated by R(d) + √(V(d)/n) Q^(-1)(ϵ), where Q is the standard Gaussian complementary cumulative distribution function.
Abstract: This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum achievable rate is shown to be closely approximated by R(d) + √(V(d)/n) Q^(-1)(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q^(-1)(·) is the inverse of the standard Gaussian complementary cumulative distribution function.
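
The dispersion approximation is directly computable once R(d) and V(d) are known; a small helper (names invented for illustration):

```python
from math import sqrt
from scipy.stats import norm

def rate_fbl(R, V, n, eps):
    """Finite-blocklength rate R(d) + sqrt(V(d)/n) * Q^{-1}(eps);
    Q^{-1} is the inverse Gaussian complementary CDF (norm.isf)."""
    return R + sqrt(V / n) * norm.isf(eps)

# The n^{-1/2} term quantifies the rate penalty for operating at a
# finite blocklength n with excess-distortion probability eps.
print(rate_fbl(R=0.5, V=0.25, n=1000, eps=1e-3))
```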

Journal ArticleDOI
TL;DR: Subspace-augmented MUSIC (SA-MUSIC) is proposed, which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions; its performance guarantees are given in terms of a version of the restricted isometry property.
Abstract: We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUltiple SIgnal Classification (MUSIC) algorithm, originally proposed by Schmidt for the direction of arrival estimation problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case, MUSIC fails and, in practice, none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC such that the support is reliably recovered under such unfavorable conditions. Combined with a subspace-based greedy algorithm, known as Orthogonal Subspace Matching Pursuit, which is also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation step, which has been missing in the previous studies of MUSIC.

Journal ArticleDOI
TL;DR: It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction is that the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector “decouples” as n scalar postulated MAP estimators.
Abstract: The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector “decouples” as n scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdu. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
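
The decoupling claim can be illustrated in miniature: each coordinate behaves like a scalar observation through Gaussian noise, denoised by the scalar estimator named above. The effective noise level and threshold below are arbitrary illustration values, not the replica-predicted ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(u, t):    # scalar equivalent of LASSO
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def hard_threshold(u, t):    # scalar equivalent of zero norm-regularization
    return np.where(np.abs(u) > t, u, 0.0)

# Scalar channel u = x + sigma_eff * z, z ~ N(0,1): the vector problem's
# per-coordinate behavior under the replica-symmetric prediction.
x, sigma_eff, t = 1.0, 0.3, 0.25
u = x + sigma_eff * rng.standard_normal(100_000)
print(np.mean((soft_threshold(u, t) - x) ** 2))  # per-coordinate MSE
```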

Journal ArticleDOI
TL;DR: The structure of LDPC convolutional code ensembles is suitable to obtain performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and windowed decoding, but the same structure imposes limitations on the performance over erasure channels with memory.
Abstract: We consider a windowed decoding scheme for LDPC convolutional codes that is based on the belief-propagation (BP) algorithm. We discuss the advantages of this decoding scheme and identify certain characteristics of LDPC convolutional code ensembles that exhibit good performance with the windowed decoder. We will consider the performance of these ensembles and codes over erasure channels with and without memory. We show that the structure of LDPC convolutional code ensembles is suitable to obtain performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and windowed decoding. However, the same structure imposes limitations on the performance over erasure channels with memory.

Journal ArticleDOI
Jong Min Kim, Okkyun Lee, Jong Chul Ye
TL;DR: A unified approach revisits the link between CS and array signal processing first unveiled in the mid 1990s; this approach requires a smaller number of sensor elements for accurate support recovery than existing CS methods and can approach the optimal l0-bound with a finite number of snapshots even in cases where the signals are linearly dependent.
Abstract: The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems have been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between the probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal l0-bound with a finite number of snapshots even in cases where the signals are linearly dependent.

Journal ArticleDOI
TL;DR: A new greedy algorithm, called the orthogonal super greedy algorithm (OSGA), is built; based on the analysis of the number of orthogonal projections and iterations, OSGA is observed to be simpler (more efficient) than OMP by a factor equal to the number of elements selected per iteration.
Abstract: The general theory of greedy approximation is well developed. Much less is known about how specific features of a dictionary can be used to our advantage. In this paper, we discuss incoherent dictionaries. We build a new greedy algorithm which is called the orthogonal super greedy algorithm (OSGA). We show that the rates of convergence of OSGA and the orthogonal matching pursuit (OMP) with respect to incoherent dictionaries are the same. Based on the analysis of the number of orthogonal projections and the number of iterations, we observe that OSGA is simpler (more efficient) than OMP by a factor equal to the number of elements selected per iteration. Greedy approximation is also a fundamental tool for sparse signal recovery. The performance of orthogonal multimatching pursuit, a counterpart of OSGA in the compressed sensing setting, is also analyzed under restricted isometry property conditions.
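
A sketch of the super greedy selection rule, with OMP recovered by taking one element per iteration; this is an illustration of the rule, not the paper's code.

```python
import numpy as np

def osga(D, y, s, iters):
    """Pick the s dictionary columns best correlated with the residual
    at each step, then re-project y onto everything selected so far."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(iters):
        c = np.abs(D.T @ r)
        c[support] = -np.inf                  # never reselect an atom
        support += list(np.argsort(c)[-s:])   # s new atoms per iteration
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef          # one projection per iteration
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```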

Journal ArticleDOI
TL;DR: It is shown that, as long as λe/λ = o((log n)^(-2)), almost all of the nodes achieve a perfectly secure rate of Ω(1/√n) for the extended and dense network models.
Abstract: This paper studies the achievable secure rate per source-destination pair in wireless networks. First, a path loss model is considered, where the legitimate and eavesdropper nodes are assumed to be placed according to Poisson point processes with intensities λ and λe, respectively. It is shown that, as long as λe/λ = o((log n)^(-2)), almost all of the nodes achieve a perfectly secure rate of Ω(1/√n) for the extended and dense network models. Therefore, under these assumptions, securing the network does not entail a loss in the per-node throughput. The achievability argument is based on a novel multihop forwarding scheme where randomization is added in every hop to ensure maximal ambiguity at the eavesdropper(s). Second, an ergodic fading model with n source-destination pairs and ne eavesdroppers is considered. Employing the ergodic interference alignment scheme with an appropriate secrecy precoding, each user is shown to achieve a constant positive secret rate for sufficiently large n. Remarkably, the scheme does not require eavesdropper CSI (only the statistical knowledge is assumed) and the secure throughput per node increases as we add more legitimate users to the network in this setting. Finally, the effect of eavesdropper collusion on the performance of the proposed schemes is characterized.

Journal ArticleDOI
TL;DR: This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading, and shows how to generalize this strategy beyond Gaussian channel models.
Abstract: This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading. At any particular time, each receiver will see a superposition of the transmitted signals plus noise. The standard approach to such a scenario results in each transmitter-receiver pair achieving a rate proportional to 1/K of its interference-free ergodic capacity. However, given two well-chosen time indices, the channel coefficients from interfering users can be made to exactly cancel. By adding up these two observations, each receiver can obtain its desired signal without any interference. If the channel gains have independent, uniform phases, this technique allows each user to achieve at least 1/2 of its interference-free ergodic capacity at any signal-to-noise ratio. Prior interference alignment techniques were only able to attain this performance as the signal-to-noise ratio tended to infinity. Extensions are given for the case where each receiver wants a message from more than one transmitter as well as the “X channel” case (with two receivers) where each transmitter has an independent message for each receiver. Finally, it is shown how to generalize this strategy beyond Gaussian channel models. For a class of finite field interference channels, this approach yields the ergodic capacity region.
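
The pairing idea is easy to verify numerically: match each channel matrix with a second realization having the same diagonal and negated off-diagonal entries, and the two received signals sum to an interference-free observation. A noiseless toy check (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3
H1 = rng.standard_normal((K, K))
H2 = -H1 + 2 * np.diag(np.diag(H1))  # same diagonal, negated off-diagonal

x = rng.standard_normal(K)           # each user repeats its symbol
y1, y2 = H1 @ x, H2 @ x              # the two received snapshots

# All cross terms cancel: receiver i sees exactly 2 * h_ii * x_i.
# Using two slots per symbol is the source of the factor 1/2 above.
print(np.allclose(y1 + y2, 2 * np.diag(H1) * x))   # True
```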

Journal ArticleDOI
TL;DR: A comprehensive survey is given of several major systematic approaches to delay-aware control problems, namely the equivalent-rate constraint approach, the Lyapunov stability drift approach, and the approximate Markov decision process approach using stochastic learning.
Abstract: In this paper, a comprehensive survey is given on several major systematic approaches in dealing with delay-aware control problems, namely the equivalent-rate constraint approach, the Lyapunov stability drift approach, and the approximate Markov decision process approach using stochastic learning. These approaches essentially embrace most of the existing literature regarding delay-aware resource control in wireless systems. They have their relative pros and cons in terms of performance, complexity, and implementation issues. For each of the approaches, the problem setup, the general solution, and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results regarding delay-aware multihop routing designs in general multihop networks are elaborated. Finally, the delay performances of various approaches are compared through simulations using an example of uplink OFDMA systems.

Journal ArticleDOI
TL;DR: In this paper, the information-theoretic limitations of graph selection for binary Markov random fields under high-dimensional scaling are analyzed, and necessary and sufficient conditions for correct graph selection over pairwise binary Markov random fields are derived.
Abstract: The problem of graphical model selection is to estimate the graph structure of a Markov random field given samples from it. We analyze the information-theoretic limitations of the problem of graph selection for binary Markov random fields under high-dimensional scaling, in which the graph size p and the number of edges k, and/or the maximal node degree d, are allowed to increase to infinity as a function of the sample size n. For pairwise binary Markov random fields, we derive both necessary and sufficient conditions for correct graph selection over the class Gp,k of graphs on p vertices with at most k edges, and over the class Gp,d of graphs on p vertices with maximum degree at most d. For the class Gp,k, we establish the existence of constants c and c' such that any method fails with substantial probability when n < c k log p, whereas correct graph selection is possible when n > c' k2 log p. Similarly, for the class Gp,d, we exhibit constants c and c' such that selection fails when n < c d2 log p and succeeds when n > c' d3 log p.

Journal ArticleDOI
TL;DR: In this article, the authors show that the 2 × 2 ×2 × 2 interference network with concatenation of two two-user interference channels achieves the min-cut outer bound of 2 DoF, for almost all values of channel coefficients.
Abstract: We show that the 2 × 2 × 2 interference network, i.e., the multihop interference network formed by the concatenation of two two-user interference channels, achieves the min-cut outer bound value of 2 DoF for almost all values of channel coefficients, for both time-varying and fixed channel coefficients. The key to this result is a new idea, called aligned interference neutralization, that provides a way to align interference terms over each hop in a manner that allows them to be canceled over the air at the last hop.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting sparse representations in another general dictionary.
Abstract: We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets.
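
A convex relaxation in the spirit of the recovery algorithms described here is easy to prototype for the noiseless case; the equal l1 weighting of the two coefficient vectors is an illustrative assumption.

```python
import cvxpy as cp

def separate(y, Da, Db, lam=1.0):
    """Recover sparse a, b with y = Da @ a + Db @ b via l1 minimization;
    Da models the signal dictionary and Db the sparse-noise dictionary."""
    a = cp.Variable(Da.shape[1])
    b = cp.Variable(Db.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(a) + lam * cp.norm1(b)),
               [Da @ a + Db @ b == y]).solve()
    return a.value, b.value   # e.g., inpainting: Db = columns of identity
```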