
Donald Goldfarb

Researcher at Columbia University

Publications - 147
Citations - 19081

Donald Goldfarb is an academic researcher from Columbia University. The author has contributed to research in topics including Matrix (mathematics) and Interior point method. The author has an h-index of 52 and has co-authored 143 publications receiving 17423 citations. Previous affiliations of Donald Goldfarb include the City University of New York and the City College of New York.

Papers
Journal Article

A family of variable-metric methods derived by variational means

TL;DR: In this paper, a rank-two variable-metric method is derived using Greenstadt's variational approach; the resulting update preserves the positive-definiteness of the approximating matrix.
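As a concrete illustration (not part of the listed summary), the best-known member of this family is the rank-two update of the Hessian approximation $B_k$, now usually called the BFGS update:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).$$

If $B_k$ is positive definite and the curvature condition $y_k^{\top} s_k > 0$ holds, then $B_{k+1}$ remains positive definite.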
Journal Article

An Iterative Regularization Method for Total Variation-Based Image Restoration

TL;DR: A new iterative regularization procedure for inverse problems based on the use of Bregman distances is introduced, with particular focus on problems arising in image processing.
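For orientation, the iteration can be summarized as follows (a compressed sketch; the quadratic fidelity term shown here is the standard denoising choice and is meant only as an illustration). Given a convex regularizer $J$, such as total variation, and a fidelity term $H(u, f) = \frac{\lambda}{2}\|u - f\|_2^2$, the procedure solves a sequence of problems

$$u^{k+1} = \arg\min_u \; \big[\, J(u) - J(u^k) - \langle p^k, u - u^k \rangle + H(u, f) \,\big], \qquad p^{k+1} = p^k - \nabla_u H(u^{k+1}, f),$$

where the bracketed term is the Bregman distance $D_J^{p^k}(u, u^k)$ plus the fidelity term, and $p^k \in \partial J(u^k)$ is a subgradient carried from one iteration to the next.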
Journal Article

Second-order cone programming

TL;DR: SOCP formulations are given for several problem classes, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
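To make the reformulation claim concrete (an illustrative sketch, not text from the abstract): a second-order cone constraint has the form $\|A_i x + b_i\|_2 \le c_i^{\top} x + d_i$, and a convex quadratic constraint $x^{\top} F^{\top} F x + q^{\top} x + r \le 0$ can be rewritten as the single cone constraint

$$\left\| \begin{pmatrix} F x \\ \tfrac{1}{2}\,(1 + q^{\top} x + r) \end{pmatrix} \right\|_2 \;\le\; \tfrac{1}{2}\,\big(1 - q^{\top} x - r\big),$$

so a convex QCQP becomes an SOCP constraint by constraint.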
Journal Article

Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing

TL;DR: In this paper, the authors propose simple and extremely efficient methods, based on Bregman iterative regularization, for solving the basis pursuit problem that arises in compressed sensing; these methods give a very accurate solution after solving only a small number of instances of the unconstrained subproblem.
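In symbols (a standard way of writing the iteration, included here for orientation rather than taken verbatim from the paper): for the basis pursuit problem $\min_u \|u\|_1$ subject to $Au = b$, the Bregman iterative method repeatedly solves an unconstrained subproblem and adds the residual back to the data,

$$u^{k+1} = \arg\min_u \; \mu \|u\|_1 + \tfrac{1}{2}\|A u - b^k\|_2^2, \qquad b^{k+1} = b^k + \big(b - A u^{k+1}\big), \quad b^0 = b,$$

and in practice only a few such subproblems are needed to satisfy $Au = b$ to high accuracy.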
Journal Article

Fixed point and Bregman iterative methods for matrix rank minimization

TL;DR: The authors propose a very fast, robust and powerful algorithm, which they call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems, and they prove convergence of the first of their proposed algorithms.
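A minimal sketch of the core fixed-point step is given below, using matrix completion (observed entries indicated by a mask) as the concrete instance of the linear map. The names shrink_singular_values and fixed_point_step are illustrative, and FPCA itself additionally uses continuation in the regularization parameter and a fast approximate SVD, both omitted here.

import numpy as np

def shrink_singular_values(Y, nu):
    # Matrix shrinkage operator: soft-threshold the singular values of Y by nu.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - nu, 0.0)) @ Vt

def fixed_point_step(X, mask, M, mu, tau):
    # One fixed-point (proximal gradient) iteration for
    #   min_X  mu * ||X||_*  +  0.5 * || mask .* (X - M) ||_F^2,
    # i.e. nuclear-norm-regularized matrix completion with observed entries in `mask`.
    grad = mask * (X - M)                       # gradient of the smooth data-fit term
    return shrink_singular_values(X - tau * grad, tau * mu)

Each step takes a gradient step on the smooth data-fit term and then applies singular value shrinkage, which is the proximal operator of the nuclear norm.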