
Dmitriy Drusvyatskiy

Researcher at University of Washington

Publications: 116
Citations: 3,059

Dmitriy Drusvyatskiy is an academic researcher at the University of Washington. He has contributed to research on topics including convex functions and the subgradient method. He has an h-index of 26 and has co-authored 108 publications receiving 2,310 citations. Previous affiliations of Dmitriy Drusvyatskiy include Cornell University.

Papers
Posted Content

Second-order growth, tilt stability, and metric regularity of the subdifferential

TL;DR: In this paper, the authors establish new relationships between second-order growth conditions on functions, metric regularity and subregularity of the limiting subdifferential, tilt stability of local minimizers, and positive-definiteness/semidefiniteness properties of the second-order Hessian.
Posted Content

The nonsmooth landscape of phase retrieval

TL;DR: In this paper, the authors consider a nonsmooth formulation of the real phase retrieval problem and show that under standard statistical assumptions, a simple subgradient method converges linearly when initialized within a constant relative distance of an optimal solution.
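The subgradient method described in this TL;DR can be sketched in a few lines. This is a minimal NumPy illustration, not code from the paper: the Gaussian measurements, problem sizes, iteration count, and the Polyak step size (which assumes the optimal value is zero, as in the noiseless setting) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 5                        # illustrative: measurements and dimension
A = rng.standard_normal((m, n))     # Gaussian measurement vectors
x_star = rng.standard_normal(n)     # hypothetical ground-truth signal
b = (A @ x_star) ** 2               # noiseless quadratic measurements

def f(x):
    # Nonsmooth (l1-type) phase retrieval objective
    return np.mean(np.abs((A @ x) ** 2 - b))

def subgrad(x):
    # A subgradient of f at x
    Ax = A @ x
    return (2.0 / m) * (A.T @ (np.sign(Ax ** 2 - b) * Ax))

# Initialize within a small relative distance of a solution,
# as the linear-convergence guarantee requires.
x = x_star + 0.1 * rng.standard_normal(n)
for _ in range(500):
    g = subgrad(x)
    if f(x) == 0.0 or np.allclose(g, 0.0):
        break
    x -= (f(x) / (g @ g)) * g       # Polyak step (optimal value is 0 here)

# Phase retrieval recovers x_star only up to sign
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

Under these assumptions, `dist` shrinks geometrically with the iteration count, matching the linear-convergence behavior the summary describes.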
Book

The Many Faces of Degeneracy in Conic Optimization

TL;DR: In this article, the authors describe various reasons for the loss of strict feasibility, whether due to poor modelling choices or (more interestingly) rich underlying structure, and discuss ways to cope with it and, in many pronounced cases, how to use it as an advantage.
Posted Content

Subgradient methods for sharp weakly convex functions

TL;DR: This work shows that subgradient methods also converge linearly for sharp functions that are only weakly convex, provided they are initialized within a fixed tube around the solution set.
Posted Content

Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

TL;DR: The proposed framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA; the authors show that nonsmooth penalty formulations do not suffer from the same type of ill-conditioning.