Proceedings ArticleDOI

Joint Sparse Recovery Using Deep Unfolding With Application to Massive Random Access

TLDR
Numerical results for a massive random access system show that the proposed modification to the MMV-ADM method and deep unfolding provide significant improvement in convergence and estimation performance.
Abstract
We propose a learning-based joint sparse recovery method for the multiple measurement vector (MMV) problem using deep unfolding. We unfold an iterative alternating direction method of multipliers (ADM) algorithm for MMV joint sparse recovery into a trainable deep network. This ADM algorithm is first obtained by modifying the squared error penalty function of an existing ADM algorithm to a back-projected squared error penalty function. Numerical results for a massive random access system show that our proposed modification to the MMV-ADM method and deep unfolding provide significant improvement in convergence and estimation performance.
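The unfolding idea in the abstract — each iteration of a model-based algorithm becomes one layer of a network, with algorithm parameters promoted to learnable weights — can be illustrated with a minimal NumPy sketch. The sketch below unrolls plain ISTA (iterative soft thresholding), not the paper's back-projected MMV-ADM; the step sizes, thresholds, and sensing matrix are illustrative assumptions standing in for parameters that a trained network would learn.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise shrinkage: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_ista(y, A, steps, taus):
    """Run a fixed number of ISTA iterations, one per 'layer'.

    In a deep-unfolded network, `steps` and `taus` (and possibly
    matrices derived from A) would be trained by backpropagation;
    here they are plain arrays standing in for learned parameters.
    """
    x = np.zeros(A.shape[1])
    for alpha, tau in zip(steps, taus):
        x = soft_threshold(x + alpha * A.T @ (y - A @ x), tau)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)  # random sensing matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]           # 3-sparse signal
y = A @ x_true

L = 15                                           # number of layers
alpha = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size
x_hat = unfolded_ista(y, A, steps=[alpha] * L, taus=[0.01] * L)
print(np.linalg.norm(x_hat - x_true) < np.linalg.norm(x_true))
```

Because the layer count is fixed at `L`, the unrolled network trades the open-ended convergence of the iterative algorithm for a fixed, trainable computation budget — the property the abstract's convergence improvements rely on.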


Citations
Proceedings ArticleDOI

FASURA: A Scheme for Quasi-Static Massive MIMO Unsourced Random Access Channels

TL;DR: This article considers the massive MIMO unsourced random access problem on a quasi-static Rayleigh fading channel and shows that an appropriate combination of its proposed ideas can substantially outperform state-of-the-art coding schemes when the number of active users exceeds 100, making this the best-performing scheme known for this regime.
Posted Content

On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective

TL;DR: The convergence rate of the BP objective for the projected gradient descent (PGD) algorithm is examined and an inherent source is identified for its faster convergence compared to the least squares objective.
Posted Content

ADMM-DAD net: a deep unfolding network for analysis compressed sensing.

TL;DR: In this article, a new deep unfolding neural network based on the ADMM algorithm for analysis compressed sensing is proposed, which jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
Proceedings ArticleDOI

ADMM-DAD Net: A Deep Unfolding Network for Analysis Compressed Sensing

TL;DR: In this paper, a new deep unfolding neural network based on the ADMM algorithm for analysis compressed sensing is proposed, which jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
Journal ArticleDOI

Model-Based Deep Learning for Joint Activity Detection and Channel Estimation in Massive and Sporadic Connectivity

TL;DR: Two model-based neural network architectures proposed for sporadic user detection and channel estimation in massive machine-type communications are presented; they are trained on synthetic data generated from the block-fading millimeter-wave multiple access channel model and are potentially a boon to cell-free MIMO systems.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
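The "adaptive estimates of lower-order moments" in the Adam TL;DR can be written out in a few lines. This is a minimal single-parameter sketch of the published update rule, with the paper's default hyperparameters assumed; the toy objective is illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    (first moment) and squared gradient (second moment), bias-corrected
    for their zero initialization, then a per-coordinate scaled step."""
    m = b1 * m + (1 - b1) * grad          # first moment estimate
    v = b2 * v + (1 - b2) * grad**2       # second (uncentered) moment estimate
    m_hat = m / (1 - b1**t)               # bias correction, t starts at 1
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(theta) = theta^2, gradient 2*theta, starting from theta = 3.0
theta, m, v = 3.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
print(abs(theta) < 0.1)
```

Note how the effective step size is roughly `lr` regardless of gradient magnitude, since `m_hat / sqrt(v_hat)` is close to the gradient's sign for a steady gradient direction.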
Proceedings Article

Learning Fast Approximations of Sparse Coding

TL;DR: Two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms are proposed.
Journal ArticleDOI

Massive Connectivity With Massive MIMO—Part I: Device Activity Detection and Channel Estimation

TL;DR: It is shown that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern.
Journal ArticleDOI

AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

TL;DR: This paper proposes two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction.
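The Onsager correction mentioned in this TL;DR is a single extra term in the residual update. Below is a schematic NumPy sketch of AMP with a soft-thresholding denoiser and an adaptive threshold; the threshold rule, problem sizes, and sparsity level are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

def amp(y, A, theta=2.0, iters=30):
    """Approximate message passing for y = A x with sparse x.

    The term (z * count / M) added to the residual is the Onsager
    correction: for large i.i.d. Gaussian A it keeps the effective
    noise at each iteration approximately Gaussian and decoupled,
    which is what distinguishes AMP from plain iterative thresholding.
    """
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                          # pseudo-data estimate
        tau = theta * np.linalg.norm(z) / np.sqrt(M)   # adaptive threshold
        x_new = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
        onsager = z * np.count_nonzero(x_new) / M      # denoiser divergence
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(1)
M, N = 100, 200
A = rng.standard_normal((M, N)) / np.sqrt(M)     # i.i.d. Gaussian columns
x_true = np.zeros(N)
x_true[rng.choice(N, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true
x_hat = amp(y, A)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Deleting the `onsager` term turns the loop back into iterative soft thresholding, which typically converges noticeably more slowly on the same instance.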
Posted Content

Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures

TL;DR: This work starts with a model-based approach and an associated inference algorithm, and folds the inference iterations as layers in a deep network, and shows how this framework allows to interpret conventional networks as mean-field inference in Markov random fields, and to obtain new architectures by instead using belief propagation as the inference algorithm.