Fast L1-Minimization Algorithms For Robust Face Recognition
Citations
925 citations
871 citations
Cites methods from "Fast L1-Minimization Algorithms For..."
...One can also adapt fast ℓ1 solvers for sparse coding [96], [97] rather than using greedy orthogonal matching pursuit algorithms....
[...]
337 citations
313 citations
Cites methods from "Fast L1-Minimization Algorithms For..."
...In the experiment, we used the efficient augmented Lagrangian method (ALM) [40] to solve the original SRC model....
[...]
...We denote by SRC-p, q the SRC method with 0 < q = p ≤ 1, and embed the proposed GISA into ALM to implement SRC-p, q for robust face recognition....
[...]
...Then, by simply replacing the soft-thresholding operator in ALM by the proposed GST operator, we can embed the proposed GISA algorithm into the ALM method for solving the SRC model with arbitrary values of p and q....
[...]
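The soft-thresholding operator mentioned in the excerpt above is the proximal operator of the ℓ1 norm; GISA's GST operator generalizes it to ℓp penalties. A minimal NumPy sketch of the standard operator (the function name `soft_threshold` is illustrative, not from either paper):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

Entries with magnitude below tau are set exactly to zero, which is what produces sparse iterates in ALM- and ISTA-type solvers.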
251 citations
References
[...]
33,341 citations
13,656 citations
"Fast L1-Minimization Algorithms For..." refers background in this paper
...s impossible to discuss and compare all the existing methods in a single paper. Methods that are not discussed in this paper include GPSR [24], SpaRSA [25], SPGL1 [26], NESTA [15], SALSA [27], GLMNET [28], and the Bregman iterative algorithm [29], just to name a few. Nevertheless, the vast majority of the existing algorithms are variants of those benchmarked in this paper, and share many common properties wit...
[...]
12,671 citations
"Fast L1-Minimization Algorithms For..." refers background in this paper
...angian of (30) given by L_μ(x, λ) = g(x) + (μ/2)‖h(x)‖₂² + λᵀh(x), (31) where λ ∈ ℝᵐ is a vector of Lagrange multipliers. L_μ(·, ·) is called the augmented Lagrangian function of (1). It has been shown in [64] that there exist λ* ∈ ℝᵐ (not necessarily unique) and μ* ∈ ℝ such that x* = arg min_x L_μ(x, λ*) for all μ > μ*. (32) Thus, it is possible to find the optimal solution to (P1) by minimizing the augmented Lagrangi...
[...]
...and λ*, respectively, provided that {λ_k} is a bounded sequence and {μ_k} is sufficiently large after a certain index. Furthermore, the convergence rate is linear as long as μ > μ*, and superlinear if μ_k → ∞ [64]. Here, we point out that the choice of {μ_k} is problem-dependent. As shown in [64], increasing μ_k increases the ill-conditionedness or difficulty of minimizing L_{μ_k}(x, λ_k), and the degree of difficulty...
[...]
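The ALM scheme described in the two excerpts above, applied to the equality-constrained problem min ‖x‖₁ s.t. Ax = b (so h(x) = Ax − b), can be sketched as follows. This is a hedged illustration, not the paper's exact implementation: the inner minimization of L_μ(x, λ_k) is done here with a few soft-thresholding (ISTA-style) steps, and the geometric growth of μ_k is one problem-dependent choice of schedule.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def alm_l1(A, b, mu=1.0, rho=1.1, outer=50, inner=100):
    """Sketch of ALM for min ||x||_1 subject to Ax = b."""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        # Inner loop: approximately minimize
        # L_mu(x, lam) = ||x||_1 + (mu/2)||Ax - b||^2 + lam^T (Ax - b)
        L = mu * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
        for _ in range(inner):
            grad = A.T @ (mu * (A @ x - b) + lam)
            x = soft(x - grad / L, 1.0 / L)
        lam = lam + mu * (A @ x - b)  # multiplier update
        mu *= rho                     # increase mu_k (problem-dependent, per the excerpt)
    return x
```

Growing μ_k too aggressively makes the inner subproblem ill-conditioned, which is exactly the trade-off the second excerpt warns about.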
11,674 citations
11,413 citations
"Fast L1-Minimization Algorithms For..." refers background or methods in this paper
...it at the expense of increasing the number of iterations as compared to the interior-point methods. Here we consider the four most visible algorithms in recent years, namely, proximal-point methods [20], [21], [15], parallel coordinate descent (PCD) [22], approximate message passing (AMP) [17], and templates for convex cone solvers (TFOCS) [23]. Before proceeding, we first introduce the proximal operator o...
[...]
...celeration techniques for ℓ1-min problems, which include two classical solutions using the interior-point method and the Homotopy method, and several first-order methods including proximal-point methods [20], [21], [15], parallel coordinate descent (PCD) [22], approximate message passing (AMP) [17], and templates for convex cone solvers (TFOCS) [23]. To set up the stage for a fair comparison and help the reade...
[...]
... [58]. While the above methods enjoy a much lower computational complexity per iteration, in practice they have been observed to converge quite slowly in terms of the number of iterations. Recently, [21] proposed a fast iterative soft-thresholding algorithm (FISTA), which has a significantly better convergence rate. The key idea behind FISTA is that, instead of forming a quadratic approximation of F( ...
[...]
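A sketch of FISTA as described in [21]: the prox-gradient step is taken at an auxiliary point y_k formed from the previous two iterates, which lifts the rate from O(1/k) to O(1/k²). Shown here for the unconstrained lasso form min ½‖Ax − b‖² + λ‖x‖₁; the function name and defaults are illustrative.

```python
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, iters=200):
    """FISTA sketch for min 0.5||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # prox-gradient step at y
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)       # momentum from last two iterates
        x, t = x_new, t_new
    return x
```

The extra momentum step costs almost nothing per iteration, which is why FISTA keeps the per-iteration complexity of ISTA while improving the rate.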
...nce behavior of the above scheme depends on the choice of α_k. For example, the popular iterative soft-thresholding algorithm (ISTA) [54], [55], [56], [25] employs a fixed choice of α_k related to L_f. In [21], assuming α_k = L_f, one can show that ISTA has a sublinear convergence rate that is no worse than O(1/k): F(x_k) − F(x*) ≤ L_f‖x_0 − x*‖² / (2k), ∀k. (17) Meanwhile, an alternative way of determining α_k at each...
[...]
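The fixed-step ISTA scheme from the excerpt can be sketched as below, again for the lasso form with step size 1/L_f (names are illustrative). Each iteration is one gradient step on the smooth term followed by soft-thresholding, which is what yields the O(1/k) bound quoted in the excerpt.

```python
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, iters=500):
    """ISTA sketch for min 0.5||Ax - b||^2 + lam * ||x||_1 with fixed step 1/L_f."""
    L = np.linalg.norm(A, 2) ** 2   # L_f = ||A||_2^2 for the quadratic data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)  # gradient step + shrinkage
    return x
```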
...in [15], yielding the so-called Nesterov's algorithm (NESTA). Both algorithms enjoy the same non-asymptotic convergence rate of O(1/k²) in the ℓ1-min setting. The interested reader may refer to [21] for a proof of the above result, which extends the original algorithm of Nesterov [59], devised only for smooth functions that are everywhere Lipschitz continuous. B. Parallel Coordinate Descent Algor...
[...]