Message passing algorithms for compressed sensing: I. motivation and construction
Citations
The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
Sparse Signal Processing for Grant-Free Massive Connectivity: A Future Paradigm for Random Access Protocols in the Internet of Things
AMP-Inspired Deep Networks for Sparse Linear Inverse Problems
Expectation-Maximization Gaussian-Mixture Approximate Message Passing
The Noise-Sensitivity Phase Transition in Compressed Sensing
Frequently Asked Questions (8)
Q2. What is the limiting value of the CDF F?
The authors say that HFP(Ψ) is a stable fixed point if 0 ≤ SC(Ψ) < 1. Let µ₂(F) = ∫ x² dF denote the second-moment functional of the CDF F.
Q3. What is the limiting value of an AMP-type algorithm?
When State Evolution is correct for an AMP-type algorithm, the authors can be sure that the algorithm converges to its limiting value rapidly, in fact exponentially fast.
Q4. What is the way to predict the evolution of a fixed point?
State Evolution predicts this phenomenon: for ε < ρSE(δ)·δ, the highest fixed point is at σt² = 0, while above this value the highest fixed point is at σt² > 0.
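This fixed-point behaviour can be explored numerically by iterating the state-evolution map σt+1² = v + (1/δ) E[(ηt(X + σt Z) − X)²]. The sketch below is our own illustration, assuming a Bernoulli–Gaussian signal prior, soft thresholding with threshold 1.5σt, and parameters δ = 0.5, ε = 0.05 chosen below the phase transition; none of these are the paper's tuned settings.

```python
import numpy as np

def se_step(sigma2, v, delta, eps, theta_mult, n_mc=200_000, seed=0):
    """One state-evolution update, estimated by Monte Carlo:
    sigma_{t+1}^2 = v + (1/delta) * E[(eta(X + sigma Z) - X)^2],
    for a Bernoulli(eps)-Gaussian signal X (illustrative prior choice)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_mc) * (rng.random(n_mc) < eps)  # sparse signal draw
    z = rng.standard_normal(n_mc)                              # effective noise
    sigma = np.sqrt(sigma2)
    theta = theta_mult * sigma                                 # threshold scales with sigma_t
    y = x + sigma * z
    eta = np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)      # soft thresholding
    return v + np.mean((eta - x) ** 2) / delta

# Iterate from a large initial variance; below the phase transition and with
# v = 0, the highest fixed point is sigma_t^2 = 0, so the iterates decay.
sigma2, v, delta, eps = 10.0, 0.0, 0.5, 0.05
for t in range(50):
    sigma2 = se_step(sigma2, v, delta, eps, theta_mult=1.5, seed=t)
print(sigma2)  # decays towards 0 for these below-transition parameters
```

Raising ε past ρSE(δ)·δ instead leaves the iteration stuck at a strictly positive fixed point, which is the phenomenon the answer describes.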
Q5. What is the common example of soft thresholding?
Here δ ≡ n/N, and {ηt(·)}t≥0 is a sequence of scalar nonlinearities (see Section III); a typical example is soft thresholding, which contracts its argument towards zero.
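Soft thresholding is simple to state in code. A minimal NumPy sketch (the threshold θ is a free parameter here, chosen arbitrarily for the example):

```python
import numpy as np

def soft_threshold(x, theta):
    """Soft thresholding: zero out |x| <= theta, shrink the rest towards 0 by theta."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

# Values inside [-theta, theta] map to 0; others move towards zero by theta.
print(soft_threshold(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]), 1.0))
```

In AMP the threshold is typically taken proportional to the current effective noise level rather than fixed.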
Q6. What is the definition of large-system limit?
Then for any bounded continuous function ζ : ℝ⁴ → ℝ of the real 4-tuples (s, u, w, x), and any number of iterations t: 1) the large-system limit ls.lim(ζ, t, A) exists for the observable ζ at iteration t; 2) this limit coincides with the expectation E(ζ | St) computed at state St. State evolution, where correct, allows us to predict the performance of AMP algorithms and tune them for optimal performance.
Q7. What is the way to solve the ℓ1-minimization problem?
While formally it can be solved by linear programming, standard linear-programming codes are far too slow for many of the applications of interest to us.
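The LP formulation mentioned here splits x into its positive and negative parts, x = u − w with u, w ≥ 0, so that ‖x‖₁ becomes the linear objective Σu + Σw. A minimal SciPy sketch (problem sizes and the solver are our illustrative choices, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, y):
    """Solve min ||x||_1 subject to Ax = y as an LP in split variables x = u - w."""
    n, N = A.shape
    c = np.ones(2 * N)                # objective: sum(u) + sum(w) = ||x||_1
    A_eq = np.hstack([A, -A])         # constraint: A u - A w = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, w = res.x[:N], res.x[N:]
    return u - w

# Toy example: a 1-sparse signal measured with a small random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 12)) / np.sqrt(6)
s0 = np.zeros(12)
s0[3] = 2.0
x_hat = basis_pursuit_lp(A, A @ s0)
```

Even at these toy sizes a general-purpose LP solver does noticeable work; at the large n, N the paper targets, this cost is exactly why cheap iterative schemes like AMP are attractive.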
Q8. What is the effective variance of the AMP algorithm?
The authors define the effective variance σ(xt)² ≡ v + (1/(Nδ)) ‖xt − s0‖₂². (4) The effective variance combines the observational variance v with an additional term (1/(Nδ)) ‖xt − s0‖₂² that the authors call the interference term.
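The definition in eq. (4) translates directly into code; the function and array names below are ours, not the paper's:

```python
import numpy as np

def effective_variance(x_t, s0, v, delta):
    """Effective variance sigma(x_t)^2 = v + (1/(N*delta)) * ||x_t - s0||_2^2:
    observational variance v plus the interference term."""
    N = s0.size
    interference = np.sum((x_t - s0) ** 2) / (N * delta)
    return v + interference

# Toy usage: when the estimate x_t equals the true signal s0, the interference
# term vanishes and only the noise variance v remains.
s0 = np.array([1.0, 0.0, -2.0, 0.0])
print(effective_variance(s0, s0, v=0.01, delta=0.5))  # -> 0.01
```

As the AMP iterates xt approach s0, the interference term shrinks and the effective variance decays towards the observational variance v.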