A Singular Value Thresholding Algorithm for Matrix Completion
Frequently Asked Questions (12)
Q2. How large a matrix of rank 10 can be recovered in about 17 minutes?
The algorithm also recovers 30,000 × 30,000 matrices of rank 10 from about 0.4% of their sampled entries in just about 17 minutes.
Q3. How does the proposed algorithm solve a matrix completion problem?
Their numerical experiments demonstrate that the proposed algorithm can solve problems, in Matlab, involving matrices of size 30,000 × 30,000 having close to a billion unknowns in 17 minutes on a standard desktop computer with a 1.86 GHz CPU (dual core with Matlab’s multithreading option enabled) and 3 GB of memory.
Q4. What is the key property of shrink(Y, τ)?
In (1.5), shrink(Y, τ) is a nonlinear function which applies a soft-thresholding rule at level τ to the singular values of the input matrix; see section 2 for details.
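This soft-thresholding rule can be sketched in a few lines of NumPy; the function name and the use of a dense SVD are illustrative (the paper's own code is a Matlab implementation using a partial SVD):

```python
import numpy as np

def shrink(Y, tau):
    """Apply soft-thresholding at level tau to the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Each singular value is moved toward zero by tau, then clipped at zero
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Singular values below τ are set to zero, which is what makes the output low rank.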
Q5. How many times faster would a Fortran implementation run?
The authors have used only the Matlab version, but since the singular value shrinkage operator is by and large the dominant cost in the SVT algorithm, they expect that a Fortran implementation would run about 3 to 4 times faster.
Q6. How can the geodesic distance matrix be approximated?
With geodesic distances, however, a numerical test suggests that the geodesic-distance matrix M can be well approximated by a low-rank matrix.
Q7. How can Dτ(Y) be evaluated?
When this approximation consists of the truncated SVD retaining the part of the expansion corresponding to singular values greater than τ, this can be used to evaluate Dτ(Y).
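In code, the observation amounts to keeping only the singular triples with σi > τ; here is a sketch assuming a full SVD is available (in practice a Lanczos-based partial SVD computes just these leading triples):

```python
import numpy as np

def D_tau(Y, tau):
    """Evaluate D_tau(Y) from a truncated SVD: only singular values
    greater than tau survive the shrinkage, so the rest of the
    expansion can be discarded."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s > tau                      # truncate at level tau
    return U[:, keep] @ np.diag(s[keep] - tau) @ Vt[keep, :]
```

The result agrees with shrinking all singular values, since those at or below τ are mapped to zero anyway.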
Q8. What is the simplest way to find a sparse vector of singular values?
Just as iterative soft-thresholding methods are designed to find sparse solutions, their iterative singular value thresholding scheme is designed to find a sparse vector of singular values.
Q9. How does the sequence {Xk} converge to the unique solution of (2.8)?
In particular, the sequence {Xk} obtained via (2.7) converges to the unique solution of (2.8) provided that 0 < inf δk ≤ sup δk < 2.
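The iteration (2.7) can be sketched in NumPy as follows; this is a simplified sketch with a constant step size δ (within the (0, 2) range above), a dense SVD, and an illustrative stopping rule, not the authors' implementation:

```python
import numpy as np

def svt(M, mask, tau, delta, iters=500, tol=1e-6):
    """Sketch of the SVT iteration:
        X^k = D_tau(Y^{k-1});  Y^k = Y^{k-1} + delta * P_Omega(M - X^k).
    `mask` is a boolean array marking the observed entries (the set Omega).
    """
    Y = np.zeros_like(M)
    norm_obs = np.linalg.norm(M[mask])
    X = Y
    for _ in range(iters):
        # D_tau: soft-threshold the singular values of Y at level tau
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Ascent step applied on the observed entries only
        resid = np.zeros_like(M)
        resid[mask] = M[mask] - X[mask]
        Y = Y + delta * resid
        if np.linalg.norm(resid[mask]) / norm_obs < tol:
            break
    return X
```

With enough observed entries and a large τ, the fixed point closely fits the data on the observed set while keeping the iterates low rank.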
Q10. How many times do the authors need to rerun the Lanczos iterations?
The authors note that it is not necessary to rerun the Lanczos iterations for the first sk vectors since they have already been computed; only a few new singular values need to be numerically evaluated.
Q11. How does the premise that the unknown has low rank change the problem?
Having said this, the premise that the unknown has (approximately) low rank radically changes the problem, making the search for solutions feasible since the lowest-rank solution now tends to be the right one.
Q12. How can low-rank matrices be recovered from a small set of sampled entries?
They proved that most low-rank matrices can be recovered exactly from most sets of sampled entries even though these sets have surprisingly small cardinality, and more importantly, they proved that this can be done by solving a simple convex optimization problem.