Fast and Efficient Compressive Sensing Using Structurally Random Matrices
Frequently Asked Questions (10)
Q2. What is the purpose of the scale coefficient √(N/M)?
The scale coefficient √(N/M) normalizes the transform so that the energy of the measurement vector is approximately equal to that of the input signal vector.
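The role of the scale coefficient can be illustrated with a minimal sketch of the SRM sensing operation Φ = √(N/M)·D·F·R (here F is taken to be an orthonormal Walsh-Hadamard transform, R is a random sign flip, and D keeps M of the N transform coefficients; the specific sizes and seed below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def wht(n):
    """Orthonormal Walsh-Hadamard transform matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

rng = np.random.default_rng(0)
N, M = 256, 64  # illustrative sizes (assumed)

x = rng.standard_normal(N)                   # input signal
r = rng.choice([-1.0, 1.0], size=N)          # R: random sign flips (pre-randomization)
F = wht(N)                                   # F: fast orthonormal transform
rows = rng.choice(N, size=M, replace=False)  # D: keep M of the N coefficients

# y = sqrt(N/M) * D F R x -- the scale factor compensates, on average, for the
# energy discarded by keeping only M of the N orthonormal-transform coefficients.
y = np.sqrt(N / M) * (F @ (r * x))[rows]

ratio = np.sum(y**2) / np.sum(x**2)
print(f"energy ratio ||y||^2 / ||x||^2 = {ratio:.3f}")
```

Because F is orthonormal and the M kept coefficients are chosen uniformly at random, the expected energy of y equals that of x; a single draw only matches approximately.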
Q3. What is the main idea behind the pre-randomization strategy?
The intuition behind this pre-randomization strategy is that scrambling a signal into a white noise-like form enables the sensing process to be independent of the signal’s sparsifying basis.
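This scrambling effect can be demonstrated with a small sketch: a signal maximally concentrated in the WHT domain (a single transform coefficient) becomes white-noise-like after random sign flipping, so its energy spreads across all transform coefficients. The setup below (sizes, seed, choice of WHT) is an illustrative assumption:

```python
import numpy as np

def wht(n):
    """Orthonormal Walsh-Hadamard transform matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

rng = np.random.default_rng(1)
N = 256  # illustrative size (assumed)
F = wht(N)

x = F[5].copy()  # signal whose WHT is a single spike: F @ x = e_5
before = np.max((F @ x) ** 2) / np.sum(x**2)          # all energy in one coefficient

r = rng.choice([-1.0, 1.0], size=N)                   # pre-randomization sign flips
after = np.max((F @ (r * x)) ** 2) / np.sum(x**2)     # energy now spread out
print(f"max coefficient energy fraction: before={before:.3f}, after={after:.3f}")
```

After randomization, no single transform coefficient dominates, which is exactly what makes the subsequent subsampling safe regardless of the sparsifying basis.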
Q4. Why must each row of F have zero average sum?
The condition that each row of F has zero average sum guarantees that the entries of FΨ have zero mean, while the condition that the entries on each row of F and on each column of Ψ are not all equal prevents the degenerate case in which entries of FΨ become a deterministic quantity.
Q5. What is the number of measurements needed for exact recovery?
If F is dense and uniform rather than block-diagonal (e.g., a DCT or normalized WHT matrix), the number of measurements needed is on the order of O(K log²(N/δ)).
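A back-of-envelope calculation shows how far below the ambient dimension this bound sits. The constants hidden by the O(·) and the base of the logarithm are omitted, and the numeric values are illustrative assumptions, not figures from the paper:

```python
import math

N, K, delta = 2**20, 100, 0.01            # illustrative values (assumed)
m_bound = K * math.log(N / delta) ** 2    # O(K log^2(N/delta)), constants dropped
print(f"~{m_bound:.0f} measurements vs N = {N} samples")
```

Even with the log-squared factor, the measurement budget stays orders of magnitude below N for sparse signals.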
Q6. What is the main challenge for CS in practice?
One of the remaining challenges for CS in practice is to design a CS framework with the following features:
• Optimal or near-optimal sensing performance: the number of measurements for exact recovery is almost minimal, i.e., on the order of O(K log N);
• Universality: sensing performance is equally good with all sparsifying bases;
• Low complexity and fast implementation that can support block-based processing: this is necessary for large-scale, real-time sensing applications.
Q7. What is the condition that each column of the sparsifying basis has zero average sum?
The condition that the absolute average sum of every column of the sparsifying basis Ψ is on the order of o(1/√N) is also close to reality, because the majority of columns of Ψ can be roughly viewed as bandpass and highpass filters, whose coefficient sums are always zero.
Q8. Why can the signal be recovered from a small subset of measurements?
Generally speaking, spreading the signal's energy over all measurements is good for recovery from a small subset of them: if the energy of some transform coefficients were concentrated in a few measurements that happen to be bypassed in the sampling process, there would be no hope of exact signal recovery even with the most sophisticated reconstruction method.
Q9. What is the difference between a SRM and a random sensing matrix?
With compressible signals (e.g., images), the number of measurements acquired tends to be proportional to the signal dimension, for example M = N/4; the computational complexity reduction from using an SRM is then on the order of N/(4 log N). Table III summarizes the practical advantages of employing an SRM over a random sensing matrix.
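The size of this reduction is easy to quantify with a sketch, assuming a dense random sensing matrix costs about M·N operations per product while an SRM applied via a fast transform (FFT/WHT) costs about N log₂ N; the value of N below is an illustrative assumption:

```python
import math

N = 2**20                         # illustrative signal dimension (assumed)
M = N // 4                        # M proportional to N, e.g. M = N/4
dense_ops = M * N                 # dense random sensing matrix: O(MN) multiply
srm_ops = N * math.log2(N)        # SRM via a fast transform: O(N log N)
reduction = dense_ops / srm_ops   # = N / (4 log2 N)
print(f"speedup ~ {reduction:.0f}x")
```

For megapixel-scale signals the fast-transform structure thus buys a four-figure speedup, on top of the fact that the SRM never needs to be stored explicitly.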
Q10. How many times do the authors count the probability of exact recovery?
For each value of sparsity K ∈ {10, 20, 30, 40, 50, 60}, the authors repeat the experiment 500 times and record the empirical probability of exact recovery.
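A Monte Carlo experiment of this kind can be sketched at reduced scale with an SRM and a textbook orthogonal matching pursuit recovery; the sizes, trial count, and seed below are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np

def wht(n):
    """Orthonormal Walsh-Hadamard transform matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def omp(A, y, K):
    """Textbook orthogonal matching pursuit: greedily select K columns of A."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
N, M, K, trials = 128, 64, 5, 50  # illustrative, smaller than the paper's setup

F = wht(N)
successes = 0
for _ in range(trials):
    r = rng.choice([-1.0, 1.0], size=N)           # R: random sign flips
    rows = rng.choice(N, size=M, replace=False)   # D: random row subsampling
    Phi = np.sqrt(N / M) * (F * r)[rows]          # SRM: sqrt(N/M) * D F R
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    x_hat = omp(Phi, Phi @ x, K)
    successes += np.linalg.norm(x_hat - x) < 1e-6 * np.linalg.norm(x)
print(f"exact recovery in {successes}/{trials} trials")
```

Counting the fraction of trials with (numerically) exact recovery, over a grid of K values, yields the empirical recovery-probability curves the paper reports.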