A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
References
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
Robust Face Recognition via Sparse Representation
De-noising by soft-thresholding
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Frequently Asked Questions (14)
Q2. What is the main purpose of this paper?
Inspired by the great success of soft thresholding [16] and iterative shrinkage/thresholding (IST) [15] methods, in this paper, the authors propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding.
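As background for the soft-thresholding and IST methods the answer refers to, here is a minimal NumPy sketch of the ℓ1 case (an illustration, not the authors' code; the step size 1/∥A∥² follows the convention used later in Eq. (35)):

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft-thresholding: the closed-form minimizer of
    0.5*(y - x)**2 + lam*|x| (the p = 1 case)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def ist(A, y, lam, n_iter=200):
    """Iterative shrinkage/thresholding for
    0.5*||y - A x||_2^2 + lam*||x||_1, with step size 1/||A||^2."""
    L = np.linalg.norm(A, 2) ** 2  # squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

GISA generalizes exactly this scheme to 0 < p < 1 by replacing the soft-thresholding step with a generalized shrinkage/thresholding operator.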
Q3. What is the lp-norm non-convex coding problem?
To use IRLS for ℓp-norm non-convex sparse coding, the problem in Eq. (3) is approximated by [26]: $\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda \sum_i (x_i^2 + \varepsilon)^{p/2-1} x_i^2$ (5), where $\varepsilon \to 0$ is a small positive number introduced to avoid division by zero.
Q4. what is the spectral norm of the matrix A?
The proposed GISA is an iterative algorithm; each iteration performs a gradient descent step based on A and y, followed by a generalized shrinkage/thresholding step: $x^{(k+1)} = T^{GST}_p\big(x^{(k)} - \|A\|^{-2}A^T(Ax^{(k)} - y);\ \|A\|^{-2}\lambda\big)$ (35), where $\|A\|$ denotes the spectral norm of the matrix A.
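The spectral norm and the gradient half-step of Eq. (35) are easy to check numerically; a small NumPy sketch (the random data and shapes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
y = rng.standard_normal(5)

# The spectral norm ||A|| is the largest singular value of A.
spec_norm = np.linalg.norm(A, 2)

# One gradient step of Eq. (35) with step size ||A||^{-2} on the
# fidelity term 0.5*||y - A x||^2; its output is then fed to the
# generalized shrinkage/thresholding (GST) operator.
x = np.zeros(3)
x_half = x - A.T @ (A @ x - y) / spec_norm ** 2
```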
Q5. What is the typical parameter setting for the SRC?
In the original SRC, the typical parameter setting is q = 2 and p = 1 (for FR without corruption) or q = 1 and p = 1 (for robust FR with corruption).
Q6. What is the regularization parameter of the gradient operator?
The regularization term of the deconvolution model is $\lambda\|Dx\|_p^p$ (Eq. (37)), where $\lambda$ is the regularization parameter, $D = [D_h; D_v]$ denotes the gradient operator, and $D_h$ and $D_v$ are the horizontal and vertical gradient operators, respectively.
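The gradient operators $D_h$ and $D_v$ are commonly discretized as first-order finite differences; a sketch assuming circular boundary conditions (a modeling choice made here for illustration, not stated in the text):

```python
import numpy as np

def grad_ops(x):
    """Horizontal and vertical first-order differences D_h x and D_v x
    (circular boundary), a common discretization of the gradient
    operator D = [D_h; D_v]."""
    dh = np.roll(x, -1, axis=1) - x   # horizontal differences
    dv = np.roll(x, -1, axis=0) - x   # vertical differences
    return dh, dv

def lp_reg(x, p):
    """The regularizer ||D x||_p^p, summed over both gradient directions."""
    dh, dv = grad_ops(x)
    return np.sum(np.abs(dh) ** p) + np.sum(np.abs(dv) ** p)
```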
Q7. What is the fidelity term of the model?
A typical image deconvolution model includes a fidelity term and a regularization term: the fidelity term is modeled based on the degradation process, while the regularization term is modeled based on image priors.
Q8. What is the effect of p on the recognition accuracy of the SRC model?
By fixing q = 2 and varying p, in the first FR experiment the authors use the extended Yale B dataset [22, 27] to test the influence of p on recognition accuracy.
Q9. How many images are used in the experiments?
In their experiments, the authors randomly select 30 images from each subject to construct a training dataset of 1,140 images, and use the remaining images for testing.
Q10. What is the simplest way to model the marginal distributions of filtering responses?
Recent studies on natural image statistics have shown that the marginal distributions of filtering responses can be modeled as hyper-Laplacian with 0 < p < 1 [25, 28, 35], which has been adopted in many low-level vision problems [13, 36].
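The hyper-Laplacian prior corresponds to the penalty $k|v|^p$; a small sketch comparing tail behavior across p (the scale k = 1 is an arbitrary choice for illustration):

```python
import numpy as np

def neg_log_prior(v, p, k=1.0):
    """Negative log of an (unnormalized) prior p(v) ∝ exp(-k*|v|^p):
    p = 2 is Gaussian, p = 1 Laplacian, and 0 < p < 1 the heavy-tailed
    hyper-Laplacian fitted to filtering responses."""
    return k * np.abs(v) ** p

# Smaller p penalizes large responses less, i.e. heavier tails.
v = 5.0
penalties = [neg_log_prior(v, p) for p in (0.5, 1.0, 2.0)]
```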
Q11. What is the thresholding function in Eq. (4)?
To guarantee that $f(x)$ has a minimum in $(x_0^{(\lambda,p)}, +\infty)$, the authors should further require $f'(x_0^{(\lambda,p)}) \le 0$. In [33], She let $f'(x_0^{(\lambda,p)}) = 0$ and solved the equation $f'(x_0^{(\lambda,p)}) = (\lambda p(1-p))^{\frac{1}{2-p}} - \tau^{ITM}_p(\lambda) + \lambda p(\lambda p(1-p))^{\frac{p-1}{2-p}} = 0$ (18). The corresponding threshold on y is $\tau^{ITM}_p(\lambda) = \lambda^{1/(2-p)}(2-p)\left[p/(1-p)^{1-p}\right]^{1/(2-p)}$ (19). In ITM, She [33] extended soft-thresholding with the thresholding function in Eq. (11).
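The consistency of Eqs. (18) and (19) can be checked numerically; the sketch below assumes $f(x) = \tfrac{1}{2}(y - x)^2 + \lambda|x|^p$ and the inflection point $x_0^{(\lambda,p)} = (\lambda p(1-p))^{1/(2-p)}$, as in the surrounding derivation:

```python
import numpy as np

p, lam = 0.5, 1.0

# Eq. (19): the ITM threshold on y.
tau = lam ** (1 / (2 - p)) * (2 - p) * (p / (1 - p) ** (1 - p)) ** (1 / (2 - p))

# Inflection point x_0^{(lam,p)} = (lam*p*(1-p))^{1/(2-p)}, where f''(x_0) = 0.
x0 = (lam * p * (1 - p)) ** (1 / (2 - p))

# Eq. (18): with y = tau, f'(x0) = x0 - tau + lam*p*x0**(p-1) must vanish.
residual = x0 - tau + lam * p * x0 ** (p - 1)
```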
Q12. How did the authors solve the lpminimization problem in Eq. (4)?
Inspired by soft-thresholding, the authors proposed a generalized shrinkage/thresholding operator that solves the ℓp-norm minimization problem in Eq. (4) by modifying both the thresholding and the shrinkage rules.
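A scalar sketch of such a generalized shrinkage/thresholding (GST) operator: the threshold formula and the fixed-point inner loop follow the paper's construction, but treat this as an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

def gst(y, lam, p, J=10):
    """Generalized shrinkage/thresholding for
    min_x 0.5*(y - x)**2 + lam*|x|**p:
    below the threshold tau the minimizer is 0; above it, solve the
    shrinkage equation x = |y| - lam*p*x**(p-1) by fixed-point iteration."""
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) + \
          lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(y) <= tau:
        return 0.0
    x = abs(y)                         # warm start at |y|
    for _ in range(J):
        x = abs(y) - lam * p * x ** (p - 1)
    return float(np.sign(y)) * x
```

At p = 1 the threshold reduces to λ and the operator coincides with soft-thresholding, which is the sanity check worth running first.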
Q13. What is the current estimation of x(k)?
Given the current estimate $x^{(k)}$, IRLS iteratively solves the problem $\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \sum_i w_i x_i^2$ (6), and updates x by $x^{(k+1)} = \left(A^TA + \mathrm{diag}(w)\right)^{-1}A^Ty$ (7), where the i-th component of the weight vector w is defined as $w_i = p\lambda \big/ \big((x_i^{(k)})^2 + \varepsilon\big)^{1-p/2}$.
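The IRLS loop sketched in NumPy; the least-squares warm start and the small fixed ε are assumptions of this illustration (the text only says ε → 0):

```python
import numpy as np

def irls(A, y, lam, p=0.5, eps=1e-8, n_iter=50):
    """IRLS for l_p sparse coding: each iteration solves the weighted
    ridge problem of Eq. (6) in closed form,
    x = (A^T A + diag(w))^{-1} A^T y, then refreshes the weights
    w_i = p*lam / ((x_i**2 + eps)**(1 - p/2))."""
    x = np.linalg.pinv(A) @ y          # least-squares warm start
    for _ in range(n_iter):
        w = p * lam / (x ** 2 + eps) ** (1 - p / 2)
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)
    return x
```

On a toy problem the small coefficient is driven toward zero while the large one survives, which is the sparsifying behavior the reweighting is designed to produce.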
Q14. What is the way to implement SRC-p?
The authors denote by SRCp,q the SRC method with 0 < q = p < 1, and embed the proposed GISA into the ALM (augmented Lagrange multiplier) framework to implement SRCp,q for robust face recognition.