Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model
Citations
High Capacity Reversible Data Hiding in Encrypted Images by Patch-Level Sparse Representation
Multi-Scale Patch-Based Image Restoration
Joint Convolutional Analysis and Synthesis Sparse Representation for Single Image Layer Separation
Deep dictionary learning
Greedy-like algorithms for the cosparse analysis model
References
Atomic Decomposition by Basis Pursuit
Matching pursuits with time-frequency dictionaries
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering
Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Related Papers (5)
Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Frequently Asked Questions (13)
Q2. What future works have the authors mentioned in the paper "Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model"?
The authors' work gives rise to several questions that are left open for future research. Further work is required to reveal additional dictionary properties and to design efficient algorithms for encouraging those properties. In this work the authors have seen evidence that linear dependencies between sets of rows in the dictionary can improve the recovery quality of pursuit algorithms. Strong linear dependencies are only one desired property of the analysis dictionary, and their experiments on natural images imply that other properties, such as ROPP, can be very useful as well.
Q3. What is the co-sparsity of the analysis model?
The co-sparsity ℓ of the analysis model is defined as the number of zeros in the vector Ωx, i.e., ‖Ωx‖₀ = p − ℓ. (1) In the synthesis model, by contrast, the representation α is obtained by a complex and non-linear pursuit process that seeks (or approximates) the sparsest solution to the linear system of equations Dα = x.
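As a concrete illustration (my own sketch, not code from the paper), the co-sparsity of a signal under a given analysis dictionary can be computed directly, assuming a small tolerance for treating numerically tiny entries as zeros:

```python
import numpy as np

def cosparsity(omega, x, tol=1e-10):
    """Co-sparsity l = number of (near-)zero entries in Omega @ x,
    so that ||Omega @ x||_0 = p - l, where p = omega.shape[0]."""
    analysis_coeffs = omega @ x
    return int(np.sum(np.abs(analysis_coeffs) < tol))

# Example: Omega is a finite-difference operator, x is piecewise constant.
p, d = 7, 8
omega = np.eye(p, d) - np.eye(p, d, k=1)  # row i computes x[i] - x[i+1]
x = np.array([1., 1., 1., 1., 3., 3., 3., 3.])
ell = cosparsity(omega, x)
# Omega @ x has a single nonzero (at the jump), so l = p - 1 = 6
```

The finite-difference operator here stands in for a generic analysis dictionary; piecewise-constant signals are highly co-sparse under it, which is exactly the analysis-model notion of simplicity.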
Q4. What is the training set used by the FoE approach?
The training set used by the FoE approach is a large database of image regions (each consisting of a set of overlapping patches) and the learning algorithm runs “offline” resulting in one generic prior that will be suitable for any natural image.
Q5. What is the main goal of the work in [16]?
The work in [16] proposes to learn Ω such that it optimizes the denoising performance on a given set of example pairs (clean and noisy versions of example signals).
Q6. What is the purpose of this paper?
In this paper the authors focus on the analysis model and, more specifically, on the development of an algorithm that would learn the analysis dictionary Ω from a set of signal examples X = [x_1, x_2, . . . , x_R].
Q7. What is the authors' approach to the constraint on x_i?

Adopting the approach taken by the K-SVD, the authors maintain the constraint on Ωx_i by constraining each x_i to remain orthogonal to the rows of Ω to which it has already been found orthogonal.
Q8. How can the authors replace the provisional steps in Algorithm 2?
Using Equation (11), the authors show that the provisional steps (lines 6–8) in Algorithm 2 can be replaced by computing q_i^(k) for every k ∉ Λ_{i−1} using (8), and that the eventual "Sweep" step can be replaced by the corresponding assignment to k̂_i.
Q9. What are the popular techniques for learning the analysis dictionary?
Referring specifically to the last point, dictionary learning, two popular techniques for this task are the MOD and K-SVD algorithms [3]–[5], whose deployment has led to state-of-the-art results in various image processing applications [2].
Q10. What is the co-support of a signal x?
The signal x is thus characterized by its co-support, which determines the subspace it is orthogonal to, and consequently the complement space to which it belongs.
Q11. What is the simplest way to generate a random analysis signal?
Generating a random analysis signal amounts to the following process: Choose a set of row indices Λ ⊆ {1, . . . , p} — this will be the signal’s co-support.
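This generation process can be sketched as follows (an illustrative implementation of the described procedure, not the authors' code): after choosing the co-support Λ, a random vector is projected onto the null space of the row sub-matrix Ω_Λ, which guarantees Ω_Λ x = 0:

```python
import numpy as np

def random_analysis_signal(omega, cosupport_size, rng=None):
    """Draw a random signal orthogonal to a randomly chosen set of
    `cosupport_size` rows of the analysis dictionary `omega`."""
    rng = np.random.default_rng(rng)
    p, d = omega.shape
    cosupport = rng.choice(p, size=cosupport_size, replace=False)
    omega_lambda = omega[cosupport]
    # Orthonormal basis of the null space of Omega_Lambda via the SVD.
    _, s, vt = np.linalg.svd(omega_lambda)
    rank = int(np.sum(s > 1e-10))
    null_basis = vt[rank:]                     # (d - rank) x d
    # Random combination of the null-space basis vectors.
    coeffs = rng.standard_normal(null_basis.shape[0])
    x = null_basis.T @ coeffs
    return x, cosupport

omega = np.random.default_rng(0).standard_normal((20, 10))
x, cosupport = random_analysis_signal(omega, cosupport_size=6, rng=1)
# Omega_Lambda @ x = 0 by construction, so Lambda is contained in the
# co-support of x.
```

For a dictionary in general position, the chosen Λ is exactly the co-support (no extra rows happen to be orthogonal to x), which is why random Gaussian rows are a convenient test case.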
Q12. What is the proposed algorithm for learning the analysis atoms?
For each row, the proposed algorithm thus alternates between the computation of this row from the current subset of chosen examples, and an update of this subset to reject outlier signals.
Q13. What is the authors' view on the update of w_j?
The authors note that, similar to the K-SVD algorithm [4], the update of wj should be affected only by those columns of X̂ that are orthogonal to it, while the remaining signal examples should have no influence.
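A minimal sketch of such a row update (my reading of the description above, under the assumption that the sub-matrix of relevant examples is given): restrict attention to the sub-matrix X̂_J of signal examples currently assigned as orthogonal to w_j, and take w_j as the unit vector minimizing ‖w^T X̂_J‖₂, i.e. the left singular vector of X̂_J with the smallest singular value:

```python
import numpy as np

def update_analysis_row(x_hat_j):
    """Update one analysis row w_j from the d x |J| sub-matrix of
    examples that should be orthogonal to it: the unit vector
    minimizing ||w^T X_J||_2 is the left singular vector of X_J
    associated with the smallest singular value."""
    u, _, _ = np.linalg.svd(x_hat_j, full_matrices=True)
    return u[:, -1]

rng = np.random.default_rng(0)
true_w = np.array([1., 0., 0., 0.])
# 30 examples lying (approximately) in the hyperplane orthogonal to true_w.
examples = rng.standard_normal((4, 30))
examples -= np.outer(true_w, true_w @ examples)      # project out true_w
noisy = examples + 1e-3 * rng.standard_normal((4, 30))
w = update_analysis_row(noisy)
# w recovers true_w up to sign and a small noise-induced error.
```

The examples outside J have no influence here, matching the statement above; the alternation in Q12 then re-selects J by rejecting examples that are far from orthogonal to the new w_j.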