Proceedings ArticleDOI

A simpler approach to weighted ℓ1 minimization

TLDR
The performance of weighted ℓ1 minimization over a non-uniform sparse signal model is analyzed by extending the “Gaussian width” analysis proposed in [1], and a heuristic for estimating the optimal weights is provided.
Abstract
In this paper, we analyze the performance of weighted ℓ1 minimization over a non-uniform sparse signal model by extending the “Gaussian width” analysis proposed in [1]. Our results are consistent with those of [7], which are currently the best known ones. However, our methods are less computationally intensive and can be easily extended to signals which have more than two sparsity classes. Finally, we also provide a heuristic for estimating the optimal weights, building on a more general model presented in [11]. Our results reinforce the fact that weighted ℓ1 minimization is substantially better than regular ℓ1 minimization and provide an easy way to calculate the optimal weights.
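The program the abstract refers to can be written as min Σᵢ wᵢ|xᵢ| subject to Ax = y, which recasts as a linear program by introducing slack variables tᵢ ≥ |xᵢ|. Below is a minimal sketch of that reformulation; the dimensions, the two-class signal split, and the weight value 3.0 are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Non-uniform sparse model: class S1 (first half of the support) is
# denser than class S2 (second half). Sizes here are arbitrary.
n, m = 40, 25
x_true = np.zeros(n)
s1 = rng.choice(n // 2, 6, replace=False)               # 6 nonzeros in S1
s2 = n // 2 + rng.choice(n // 2, 1, replace=False)      # 1 nonzero in S2
x_true[s1] = rng.standard_normal(6)
x_true[s2] = rng.standard_normal(1)

A = rng.standard_normal((m, n)) / np.sqrt(m)            # Gaussian measurements
y = A @ x_true

def weighted_l1(A, y, w):
    """Solve min sum_i w_i |x_i| s.t. Ax = y as a linear program.

    Variables are z = [x, t]; the constraint |x_i| <= t_i is split
    into the pair x - t <= 0 and -x - t <= 0.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])                # objective: w^T t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])             # Ax = y, t unconstrained here
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Heavier weight on the class expected to be sparser (S2).
w = np.concatenate([np.ones(n // 2), 3.0 * np.ones(n // 2)])
x_hat = weighted_l1(A, y, w)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The choice of putting the larger weight on the sparser class follows the intuition in the abstract; the paper's heuristic for the *optimal* weights is not reproduced here.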


Citations
Posted Content

Weighted $\ell_1$-Minimization for Sparse Recovery under Arbitrary Prior Information

TL;DR: This work studies the recovery conditions and the associated recovery guarantees of weighted $\ell_1$-minimization when arbitrarily many distinct weights are permitted, and includes numerical experiments that demonstrate the benefits of allowing non-uniform weights in the reconstruction procedure.
Proceedings ArticleDOI

Recovery threshold for optimal weight ℓ1 minimization

TL;DR: This paper gives a simple closed-form expression for the minimum number of measurements required for successful recovery when the optimal weights are chosen, and shows that this number is upper bounded by the sum of the minimum numbers of measurements needed had the S1 and S2 components of the signal been measured separately.
Proceedings Article

Weighted Graph Clustering with Non-Uniform Uncertainties

TL;DR: This work proposes a clustering algorithm based on optimizing an appropriate weighted objective, where larger weights are given to observations with lower uncertainty, and derives a provably optimal weighting scheme that applies to any given weights and uncertainty distribution.
Proceedings ArticleDOI

Reweighted ℓ1-minimization for sparse solutions to underdetermined linear systems

TL;DR: The numerical experimental results demonstrate that the new method achieves higher success probabilities than classical ℓ1 minimization and other iteratively reweighted ℓ1 algorithms.
References
Journal ArticleDOI

Decoding by linear programming

TL;DR: f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Posted Content

Decoding by Linear Programming

TL;DR: In this paper, it was shown that under suitable conditions on the coding matrix, the input vector can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program).
Journal ArticleDOI

The Convex Geometry of Linear Inverse Problems

TL;DR: This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
Journal ArticleDOI

On sparse reconstruction from Fourier and Gaussian measurements

TL;DR: This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements by showing that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation.
Book

Sparse nonnegative solution of underdetermined linear equations by linear programming

TL;DR: It is shown that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.