
Vladimir Braverman

Researcher at Johns Hopkins University

Publications -  185
Citations -  3374

Vladimir Braverman is an academic researcher from Johns Hopkins University. The author has contributed to research in topics including computer science and coresets. The author has an h-index of 25 and has co-authored 158 publications receiving 2475 citations. Previous affiliations of Vladimir Braverman include the University of California, Los Angeles and Google.

Papers
Posted Content

Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate

TL;DR: This paper makes an initial attempt to characterize the particular regularization effect of SGD in the moderate learning rate regime by studying its behavior when optimizing an overparameterized linear regression problem, and shows that the directional bias of the iterates does matter when early stopping is adopted.
Posted Content

Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate

TL;DR: In this article, the authors make an initial attempt to characterize the particular regularization effect of SGD in the moderate learning rate regime by studying its behavior for optimizing an overparameterized linear regression problem.
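The two postings above are versions of the same study of SGD on overparameterized linear regression under early stopping. A minimal sketch of that setting is given below; the dimensions, learning rates, and step budget are illustrative assumptions, not the paper's configuration.

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                        # overparameterized: more features than samples
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def sgd(lr, steps):
    # plain single-sample SGD on the squared loss, started from zero
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

# compare a small and a moderate learning rate under the same early-stopping budget
for lr in (0.0005, 0.005):
    w = sgd(lr, steps=2000)
    print(f"lr={lr}: train MSE = {np.mean((X @ w - y) ** 2):.4f}")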
Posted Content

Measuring $k$-Wise Independence of Streaming Data under $L_2$ Norm

TL;DR: A novel combinatorial approach to analyzing the second moment (i.e., variance) of dependent sketches for measuring k-wise independence on a stream of tuples, for any constant k.
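For intuition, the quantity being estimated can be written down directly for k = 2: the L2 distance between the empirical joint distribution of a stream of pairs and the product of its marginals. The brute-force computation below stores full counts on a hypothetical toy stream and is only meant to show the target quantity; the paper's contribution is estimating it with small-space sketches.

from collections import Counter
from itertools import product

stream = [(0, 1), (1, 0), (0, 0), (1, 1), (0, 1), (1, 0)]   # hypothetical stream of pairs
m = len(stream)
joint = Counter(stream)                       # empirical joint distribution
left = Counter(a for a, _ in stream)          # marginal of the first coordinate
right = Counter(b for _, b in stream)         # marginal of the second coordinate

dist_sq = 0.0
for a, b in product(left, right):
    p_joint = joint[(a, b)] / m
    p_indep = (left[a] / m) * (right[b] / m)  # mass this pair would get under exact independence
    dist_sq += (p_joint - p_indep) ** 2
print("L2 distance from pairwise independence:", dist_sq ** 0.5)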
Journal ArticleDOI

Selective experience replay compression using coresets for lifelong deep reinforcement learning in medical imaging

TL;DR: In this paper, a reward distribution-preserving coreset compression technique was proposed for compressing experience replay buffers stored for selective experience replay. However, the method is not suitable for the task of brain tumor segmentation.
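As a rough stand-in for the idea (not the paper's coreset construction, and with hypothetical transition and bin parameters), one can compress a replay buffer by stratified sampling over reward bins so that the compressed buffer's reward distribution roughly tracks the original:

import random
from collections import defaultdict

def compress_replay(buffer, target_size, n_bins=10):
    # stratified subsampling over reward bins, preserving bin proportions
    rewards = [r for (_, _, r, _) in buffer]
    lo, hi = min(rewards), max(rewards)
    width = (hi - lo) / n_bins or 1.0
    bins = defaultdict(list)
    for transition in buffer:
        idx = min(int((transition[2] - lo) / width), n_bins - 1)
        bins[idx].append(transition)
    compressed = []
    for items in bins.values():
        k = max(1, round(target_size * len(items) / len(buffer)))
        compressed.extend(random.sample(items, min(k, len(items))))
    return compressed

# hypothetical buffer of (state, action, reward, next_state) transitions
buffer = [((i,), i % 3, random.gauss(0.0, 1.0), (i + 1,)) for i in range(1000)]
small = compress_replay(buffer, target_size=100)
print(len(buffer), "->", len(small), "transitions kept")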
Posted Content

Obtaining Adjustable Regularization for Free via Iterate Averaging

TL;DR: An averaging scheme is established that provably converts the iterates of SGD on an arbitrary strongly convex and smooth objective function into those of its regularized counterpart, with an adjustable regularization parameter; the same methods are also shown empirically to work on more general optimization objectives, including neural networks.
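A small numerical illustration of the iterate-averaging idea is sketched below for a special case: full-batch gradient descent on least squares started from zero, averaged with illustrative geometric weights, rather than the paper's general SGD scheme.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
H, b = X.T @ X / n, X.T @ y / n              # least-squares objective 0.5*w'Hw - b'w + const

eta, lam, T = 0.1, 0.5, 3000                 # step size, target regularization, iterations
w = np.zeros(d)
avg, total, weight = np.zeros(d), 0.0, 1.0
for _ in range(T):
    avg += weight * w                        # geometric weights (1 - lam*eta)**t
    total += weight
    weight *= 1.0 - lam * eta
    w -= eta * (H @ w - b)                   # full-batch gradient step
avg /= total

# the weighted average tracks an explicitly regularized (ridge) solution
ridge = np.linalg.solve(H + (lam / (1.0 - lam * eta)) * np.eye(d), b)
print("||averaged iterate - ridge solution|| =", np.linalg.norm(avg - ridge))

In this special case the geometric average coincides, up to truncation of the weights at T steps, with the ridge solution whose regularization parameter is lam / (1 - lam*eta); the paper establishes the corresponding conversions for stochastic gradients and general strongly convex objectives.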