
Hongseok Namkoong

Researcher at Columbia University

Publications: 37
Citations: 2,972

Hongseok Namkoong is an academic researcher from Columbia University. The author has contributed to research on the topics of robust optimization and computer science. The author has an h-index of 16 and has co-authored 29 publications receiving 2,065 citations. Previous affiliations of Hongseok Namkoong include Stanford University.

Papers
Posted Content

Certifying Some Distributional Robustness with Principled Adversarial Training

TL;DR: In this paper, the authors propose a training procedure that augments model parameter updates with worst-case perturbations of the training data, guaranteeing moderate levels of robustness at little computational or statistical cost relative to empirical risk minimization.
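The procedure described above alternates an inner maximization over input perturbations with an outer descent on the model parameters. Below is a minimal sketch of one such update, assuming a linear model with squared loss; the function name, penalty form, and all hyperparameters are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def adversarial_train_step(w, x, y, gamma=1.0, lr=0.1, inner_steps=5, inner_lr=0.1):
    """One update of a WRM-style procedure (hedged sketch): ascend a
    distance-penalized loss in the input to find a worst-case perturbation z
    of x, then descend the model loss at z with respect to the parameters w."""
    z = x.copy()
    for _ in range(inner_steps):
        resid = w @ z - y
        # gradient of 0.5*(w@z - y)^2 - gamma*||z - x||^2 with respect to z
        grad_z = resid * w - 2.0 * gamma * (z - x)
        z = z + inner_lr * grad_z          # inner maximization step
    # outer minimization step on the parameters at the adversarial point z
    grad_w = (w @ z - y) * z
    return w - lr * grad_w
```

The penalty coefficient `gamma` controls how far the perturbation may move from the original input: a large `gamma` keeps the inner problem strongly concave, which is what makes the robustness certificate cheap to obtain.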
Proceedings Article

Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences

TL;DR: This work develops efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and to provide optimal tradeoffs between bias and variance, and it solves the resulting minimax problems at nearly the same computational cost as stochastic gradient descent.
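The minimax structure mentioned above can be sketched in a few lines: an adversary reweights the minibatch toward high-loss examples, and the model descends the reweighted gradient. The softmax/KL-style reweighting here is an illustrative stand-in for the paper's f-divergence uncertainty set, and all names and hyperparameters are assumptions.

```python
import numpy as np

def dro_sgd_step(w, X, y, temp=1.0, lr=0.1):
    """Hedged sketch of one robust stochastic step: upweight the hardest
    examples in the minibatch, then descend the reweighted gradient.
    A linear model with squared loss is used purely for illustration."""
    resid = X @ w - y
    losses = 0.5 * resid ** 2                 # per-example losses
    q = np.exp(losses / temp)
    q = q / q.sum()                           # adversarial minibatch weights
    grads = resid[:, None] * X                # per-example gradients
    return w - lr * (q @ grads)
```

As `temp` shrinks, the weights concentrate on the single worst example; as it grows, the step approaches plain SGD, mirroring how the divergence-ball radius interpolates between worst-case and average-case training.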
Proceedings Article

Generalizing to Unseen Domains via Adversarial Data Augmentation

TL;DR: This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, yielding an adaptive data augmentation method in which adversarial examples are appended at each iteration.
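The augmentation loop described above can be sketched as follows: perturb each example by distance-penalized gradient ascent on its loss so it becomes "hard" for the current model, then append the perturbed copies to the dataset. A linear model with squared loss is an illustrative assumption here, as are the function name and hyperparameters.

```python
import numpy as np

def augment_with_hard_examples(w, X, y, gamma=1.0, steps=5, step_size=0.1):
    """Hedged sketch of one round of adaptive data augmentation: generate
    adversarially perturbed copies of the data and append them, keeping the
    original labels."""
    Z = X.copy()
    for _ in range(steps):
        resid = Z @ w - y
        # ascend 0.5*(Z@w - y)^2 - gamma*||Z - X||^2, row by row
        grad_Z = resid[:, None] * w[None, :] - 2.0 * gamma * (Z - X)
        Z = Z + step_size * grad_Z
    return np.vstack([X, Z]), np.concatenate([y, y])
```

In the full method this round would be repeated, retraining the model on the growing dataset between rounds so that each batch of fictitious examples targets the current model's weaknesses.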
Posted Content

Learning Models with Uniform Performance via Distributionally Robust Optimization

TL;DR: This work develops a distributionally robust stochastic optimization framework that learns a model with good performance under perturbations to the data-generating distribution, gives a convex formulation of the problem, and provides several convergence guarantees.
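For intuition about the robust objective above: over a chi-square divergence ball around the empirical distribution, the worst-case expected loss expands, to first order, as the mean loss plus a variance penalty. The sketch below computes that approximation; it is an illustrative first-order expression under that assumption, not the paper's exact convex formulation.

```python
import numpy as np

def chi2_robust_objective(losses, rho):
    """Hedged sketch: approximate worst-case expected loss over a
    chi-square ball of radius rho as mean + sqrt(2 * rho * variance)."""
    losses = np.asarray(losses, dtype=float)
    return losses.mean() + np.sqrt(2.0 * rho * losses.var())
```

The variance term is what pushes the learned model toward uniform performance: two models with the same average loss are distinguished by how unevenly that loss is spread across examples.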