Journal ArticleDOI
Two-Point Step Size Gradient Methods
TLDR
TL;DR: Study of new gradient descent methods for the approximate solution of the unconstrained minimization problem, with an analysis of their convergence.
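The two-point step size is simple to sketch: it chooses the step length from the difference between the last two iterates and the last two gradients. A minimal illustrative example on a small quadratic (the matrix `A`, vector `b`, and iteration count are made-up example data, not taken from the paper):

```python
import numpy as np

# Barzilai-Borwein (BB1) step size on an illustrative 2x2 quadratic
# f(x) = 0.5 x'Ax - b'x; A and b are invented example data.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
g = grad(x)
x_new = x - 0.1 * g              # one ordinary gradient step yields two points
for _ in range(50):
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-12:
        break                    # gradient essentially zero: converged
    s, y = x_new - x, g_new - g  # differences of iterates and of gradients
    alpha = (s @ s) / (s @ y)    # the two-point (BB1) step size
    x, g = x_new, g_new
    x_new = x - alpha * g
```

Note that the step size requires no line search: it is computed entirely from the two stored difference vectors, which is what makes the method so cheap per iteration.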
Citations
Book
Machine Learning : A Probabilistic Perspective
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Book
Proximal Algorithms
Neal Parikh, Stephen Boyd, et al.
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
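One proximal operator that arises constantly in practice has a closed form: the prox of the scaled l1 norm is elementwise soft thresholding. A minimal sketch (the test values are my own illustration, not from the monograph):

```python
import numpy as np

# prox_{t*||.||_1}(v) = sign(v) * max(|v| - t, 0), applied elementwise.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Entries smaller than t in magnitude are zeroed; the rest shrink toward 0.
soft_threshold(np.array([1.5, -0.2, 0.7]), 0.5)  # -> [1.0, 0.0, 0.2]
```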
Journal ArticleDOI
Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems
TL;DR: This paper proposes gradient projection algorithms for the bound-constrained quadratic programming (BCQP) formulation of sparse reconstruction problems, and tests variants of the approach that select the line-search parameters in different ways, including techniques based on the Barzilai-Borwein method.
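The core iteration combines a gradient step, a Barzilai-Borwein step size, and a projection onto the feasible set. A bare-bones sketch on an illustrative nonnegativity-constrained QP (the data are invented, and this omits the safeguards and line searches of the actual GPSR algorithm):

```python
import numpy as np

# Projected gradient with a BB step for min 0.5 x'Ax - b'x  s.t.  x >= 0.
# A and b are made-up example data; the true minimizer is x* = [0, 2].
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 2.0])
grad = lambda x: A @ x - b
proj = lambda x: np.maximum(x, 0.0)   # projection onto the nonnegative orthant

x = proj(np.zeros(2))
g = grad(x)
x_new = proj(x - 0.1 * g)             # plain projected step to get two points
for _ in range(100):
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    # BB1 step size; fall back to 1.0 when the curvature estimate vanishes
    alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0
    x, g = x_new, g_new
    x_new = proj(x - alpha * g)
```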
Journal ArticleDOI
Message-passing algorithms for compressed sensing
TL;DR: A simple, costless modification to iterative thresholding is introduced, making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures; the modification is inspired by belief propagation in graphical models.
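The baseline the paper modifies, plain iterative soft thresholding (ISTA), alternates a gradient step on the least-squares term with elementwise soft thresholding. A minimal sketch of that unmodified baseline, not of the paper's message-passing algorithm (the problem data, `lam`, and iteration count are illustrative assumptions):

```python
import numpy as np

# ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 on a small synthetic problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])  # sparse ground truth
b = A @ x_true                                  # noiseless measurements
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient

x = np.zeros(5)
for _ in range(500):
    v = x - A.T @ (A @ x - b) / L                          # gradient step
    x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
```

With noiseless data and small `lam`, the iterates approach the sparse ground truth up to a small shrinkage bias.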
Journal ArticleDOI
Probing the Pareto Frontier for Basis Pursuit Solutions
TL;DR: A root-finding algorithm for finding arbitrary points on a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution is described, and it is proved that this curve is convex and continuously differentiable over all points of interest.