Open Access Journal Article

Capacity-achieving Sparse Superposition Codes via Approximate Message Passing Decoding

TL;DR
In this article, an approximate message passing decoder for sparse superposition codes was proposed, whose decoding complexity scales linearly with the size of the design matrix, and it was shown to asymptotically achieve the AWGN capacity with an appropriate power allocation.
Abstract
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. Simulation results are provided to demonstrate the performance of the decoder at finite blocklengths. We introduce a power allocation scheme to improve the empirical performance, and demonstrate how the decoding complexity can be significantly reduced by using Hadamard design matrices.
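As a concrete illustration of this construction, the sketch below encodes a message as a sparse linear combination of columns of a Gaussian design matrix (one column per section) and decodes it with an AMP iteration that alternates a residual step, including an Onsager correction, with a section-wise softmax denoiser. This is a minimal sketch under simplifying assumptions: the flat power allocation, the iteration count, and the empirical update of the effective noise variance tau2 are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

# Minimal SPARC + AMP sketch (illustrative only; the paper's capacity result
# relies on a specific power allocation and a state-evolution analysis).
# L sections of M columns each; one column is "on" per section, so the
# codeword is a sparse linear combination of columns of the design matrix A.
L, M, n = 32, 64, 512                 # sections, section size, code length
P, sigma2 = 1.0, 0.25                 # codeword power, noise variance
rng = np.random.default_rng(0)

A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))  # Gaussian design matrix
P_sec = np.full(L, P / L)                                # flat power allocation (assumption)
c = np.sqrt(n * P_sec)                                   # non-zero value in each section

# Encode: the message picks one column per section.
msg = rng.integers(0, M, size=L)
beta = np.zeros(L * M)
beta[np.arange(L) * M + msg] = c
y = A @ beta + rng.normal(0.0, np.sqrt(sigma2), size=n)

# AMP decoding: residual with Onsager correction, then a section-wise
# Bayes-optimal (softmax) denoiser.
beta_t = np.zeros(L * M)
z = y.copy()
tau2 = sigma2 + P
for _ in range(25):
    s = beta_t + A.T @ z                       # effective observation
    beta_next = np.zeros_like(beta_t)
    for l in range(L):
        sec = slice(l * M, (l + 1) * M)
        w = s[sec] * c[l] / tau2
        w = np.exp(w - w.max())                # numerically stable softmax
        beta_next[sec] = c[l] * w / w.sum()
    onsager = (z / tau2) * (P - np.sum(beta_next ** 2) / n)
    z = y - A @ beta_next + onsager
    tau2 = sigma2 + P - np.sum(beta_next ** 2) / n   # empirical proxy for state evolution
    beta_t = beta_next

decoded = beta_t.reshape(L, M).argmax(axis=1)
print("section error rate:", np.mean(decoded != msg))
```

Replacing the flat allocation with a tuned power allocation, or the Gaussian design with a Hadamard-based one, corresponds to the improvements mentioned in the abstract.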


Citations
Journal Article

Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air

TL;DR: This work introduces a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and provides convergence analysis for this approach.
Journal Article

Statistical physics of inference: Thresholds and algorithms

TL;DR: In this paper, the authors provide a pedagogical review of the current state-of-the-art algorithms for the planted spin glass problem, with a focus on the Ising model.
Posted Content

SPARCs for Unsourced Random Access

TL;DR: Finite blocklength simulations study AMP decoding, with suitable approximations, combined with an outer code recently proposed by Amalladinne et al.
Proceedings Article

Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models

TL;DR: In this article, the authors analyze the mutual information (or free entropy) from which the Bayes-optimal estimation and generalization errors of generalized linear models (GLMs) can be deduced.
Proceedings Article

SPARCs and AMP for Unsourced Random Access

TL;DR: Finite blocklength simulations study AMP decoding, with suitable approximations, combined with an outer code recently proposed by Amalladinne et al.
References
Book

Compressed sensing

TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
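As a hedged illustration of the linear-program recovery mentioned in this TL;DR, the sketch below recasts Basis Pursuit (minimize the l1 norm of x subject to Ax = y) as a linear program by splitting x into non-negative parts; the dimensions and the use of scipy.optimize.linprog are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np
from scipy.optimize import linprog

# Recover a sparse vector from n << m random measurements via Basis Pursuit,
# recast as a linear program with x = u - v, u >= 0, v >= 0.
rng = np.random.default_rng(1)
m, n, k = 128, 48, 5                     # ambient dimension, measurements, sparsity
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

c = np.ones(2 * m)                       # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```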
Journal Article

An Introduction To Compressive Sampling

TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Journal Article

Decoding by linear programming

TL;DR: The input vector f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Posted Content

Decoding by Linear Programming

TL;DR: In this paper, it was shown that under suitable conditions on the coding matrix, the input vector can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program).
Journal Article

Message-passing algorithms for compressed sensing

TL;DR: A simple, costless modification to iterative thresholding, inspired by belief propagation in graphical models, is introduced, making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures.
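The "costless modification" referred to here is an Onsager correction term added to the residual of iterative soft thresholding; the sketch below shows the idea under assumed parameters (threshold rule, dimensions) that are not the cited paper's exact choices.

```python
import numpy as np

# Iterative soft thresholding plus the extra AMP (Onsager) term in the residual.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
m, n, k = 400, 200, 20                        # signal dimension, measurements, sparsity
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

x, z = np.zeros(m), y.copy()
for _ in range(30):
    thresh = 1.5 * np.linalg.norm(z) / np.sqrt(n)   # residual-based threshold (assumption)
    x_new = soft_threshold(x + A.T @ z, thresh)
    onsager = z * np.count_nonzero(x_new) / n        # the added correction term
    z = y - A @ x_new + onsager
    x = x_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```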