Fast Non-Negative Orthogonal Matching Pursuit
Citations
A Compressed Sensing Approach to Group-testing for COVID-19 Detection
A Compressed Sensing Approach to Pooled RT-PCR Testing for COVID-19 Detection
Manifold learning based data-driven modeling for soft biological tissues.
Non-Negative Orthogonal Greedy Algorithms
An integrated manifold learning approach for high-dimensional data feature extractions and its applications to online process monitoring of additive manufacturing
References
Matching pursuits with time-frequency dictionaries
Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition
Sparse Approximate Solutions to Linear Systems
Sparse Unmixing of Hyperspectral Data
Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis
Frequently Asked Questions (12)
Q2. What is the cost of the CNNOMP?
The CNNOMP of [9] has an internal non-negative least squares optimisation step, which has an asymptotic computational complexity of O(LMk^2), where k is the iteration number and L is the number of internal iterations [14].
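As a sketch of why this inner solver dominates the per-iteration cost, the snippet below runs one such non-negative least squares step, with SciPy's active-set `nnls` standing in for the internal solver of [14]; the dimensions M and k are illustrative assumptions, not values from the paper.

```python
# One CNNOMP-style inner step (assumption: Phi_k holds the k atoms
# selected so far out of an M x N dictionary). scipy's nnls is itself
# iterative, which is the source of the extra factor L in O(L*M*k^2).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
M, k = 64, 5
Phi_k = rng.standard_normal((M, k))    # currently selected atoms
y = Phi_k @ rng.uniform(0.5, 1.5, k)   # non-negative ground-truth coefficients

x_k, residual_norm = nnls(Phi_k, y)    # inner non-negative least squares
```

Since the ground-truth coefficients are non-negative and k < M, the constrained solve recovers them and the residual is numerically zero.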
Q3. How was the test signal y generated?
y was generated using a Gaussian-Bernoulli model, i.e. a uniformly random support and a Normal distribution for the non-zero coefficients.
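A minimal sketch of such a Gaussian-Bernoulli draw; the signal length N and sparsity K below are assumed values for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 100, 8                                     # assumed length and sparsity
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)    # uniformly random support
x[support] = rng.standard_normal(K)               # Normal non-zero coefficients
# y = Phi @ x would then be the measurement for a dictionary Phi
```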
Q4. What is the conclusion of this part?
The conclusion of this part relies on the fact that L scales with the order of K and P does not scale directly with the dimension of the problem.
Q5. What is the approximation of y?
In the kth iteration, let the best approximation of y with non-negative coefficients, using Φ_k, be Σ_{i=1}^{k} x_i φ_i = Σ_{i=1}^{k} z_i ψ_i.
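The two expansions agree because the ψ_i can be taken as the orthonormalised atoms from a thin QR factorisation Φ_k = Q_k R_k, so that z = R_k x. A small numerical check of this identity (dimensions are illustrative assumptions):

```python
# Phi_k x = Q_k (R_k x) = Q_k z, so the two expansions describe
# the same vector with coefficients related by z = R_k x.
import numpy as np

rng = np.random.default_rng(2)
Phi_k = rng.standard_normal((32, 4))
Q, R = np.linalg.qr(Phi_k)        # columns of Q play the role of the psi_i
x = rng.uniform(0.1, 1.0, 4)      # non-negative coefficients
z = R @ x
```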
Q6. What is the cost of the FNNOMP?
Another extra computational cost of FNNOMP is the sorting of the coefficients, which is O(N log(P)) in the worst case for finding the P largest coefficients in sorted order.
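A partial-selection routine with this worst-case cost scans all N coefficients while maintaining a size-P heap; Python's `heapq.nlargest` is one such implementation (the values of N and P here are illustrative):

```python
# heapq.nlargest keeps a P-element heap while scanning all N values,
# matching the O(N log P) worst case quoted above, and returns the
# P largest entries already sorted in descending order.
import heapq
import numpy as np

rng = np.random.default_rng(3)
c = rng.standard_normal(1000)
P = 5
top = heapq.nlargest(P, range(len(c)), key=lambda i: c[i])  # indices
```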
Q7. How can the computational burden of the per-iteration least squares step be reduced?
To combat such a computational burden, incorporation of a matrix decomposition of the selected sub-dictionary has been proposed, where QR factorisation is among the most effective techniques.
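A sketch of how a QR factorisation of the selected sub-dictionary replaces the full least squares solve, here with NumPy's `qr` and an unconstrained solve for illustration (the paper's setting additionally enforces non-negativity, which this sketch omits):

```python
# With Phi_k = Q R, the LS problem Phi_k x ~ y reduces to the
# triangular system R x = Q^T y, solved by back-substitution instead
# of re-solving the normal equations from scratch each iteration.
import numpy as np

rng = np.random.default_rng(4)
Phi_k = rng.standard_normal((64, 6))
y = rng.standard_normal(64)

Q, R = np.linalg.qr(Phi_k)
x_qr = np.linalg.solve(R, Q.T @ y)   # scipy's solve_triangular would exploit structure
x_ls = np.linalg.lstsq(Phi_k, y, rcond=None)[0]
```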
Q8. Does the analysis assume a dictionary with a fast implicit multiplication?
The analysis here is based on a dictionary without such a fast multiplication, as it applies to many applications with non-negative sparsity models.
Q9. What is the simplest way to determine if a loop terminates?
As the dictionary has a finite dimension N, the inner loop terminates when a “Terminate” signal occurs or p = N. While there might be worst cases for which the inner loop has to check all elements before termination, their observation is that this loop terminates after only a few iterations.
Q10. What notation do the authors assume for the selected atoms?
With some abuse of notation, the authors assume that in iteration k, the columns of Φk are sorted based on the iteration number and φi, for 1 ≤ i ≤ k, is the ith selected atom.
Q11. How can the authors avoid the inversion of matrix R?
The inversion of matrix R, which is necessary to find x at the end of the algorithm [10], can be avoided using an iterative update of R^{-1}.
Q12. How can the authors check that γ is the last column of R_{k+1}^{-1}?
In this setting, the authors can easily check that γ is the last column of R_{k+1}^{-1}, if φ_{k+1} is the selected atom, i.e.
γ = [ −(R_k^{-1} ν)/µ ; 1/µ ],   (6)
where ν and µ are the same as those defined after (3).
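The update in (6) can be checked numerically: build ν and µ from a new atom, form γ as above, and compare it against the last column of the inverse of the directly assembled R_{k+1}. The dimensions and the dense inversions below are for illustration only; an implementation would update R^{-1} incrementally.

```python
import numpy as np

rng = np.random.default_rng(5)
M, k = 32, 4
Phi = rng.standard_normal((M, k + 1))
Q_k, R_k = np.linalg.qr(Phi[:, :k])   # QR of the first k selected atoms
phi_new = Phi[:, k]                   # candidate atom phi_{k+1}

nu = Q_k.T @ phi_new                  # nu: coefficients on the current atoms
res = phi_new - Q_k @ nu
mu = np.linalg.norm(res)              # mu: norm of the orthogonal residual

# gamma from (6): proposed last column of R_{k+1}^{-1}
gamma = np.concatenate([-np.linalg.inv(R_k) @ nu / mu, [1.0 / mu]])

# Assemble R_{k+1} = [[R_k, nu], [0, mu]] directly and compare
R_next = np.block([[R_k, nu[:, None]],
                   [np.zeros((1, k)), np.array([[mu]])]])
```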