
Vladimir Braverman

Researcher at Johns Hopkins University

Publications - 185
Citations - 3374

Vladimir Braverman is an academic researcher from Johns Hopkins University. The author has contributed to research in topics: Computer science & Coreset. The author has an h-index of 25 and has co-authored 158 publications receiving 2475 citations. Previous affiliations of Vladimir Braverman include University of California, Los Angeles and Google.

Papers
Posted Content

Streaming Quantiles Algorithms with Small Space and Update Time.

TL;DR: Practical variants of the first asymptotically optimal algorithm for approximating quantiles and distributions over streaming data are provided, with improved constants, together with a specialized data structure for these sketches that reduces both their memory footprint and update time.
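
A rough, hedged sketch of the kind of data structure involved: the following is a minimal compactor-based quantile sketch in the spirit of KLL-style streaming quantile algorithms. It is a generic Python toy, not the optimized structure from the paper; the buffer size and promotion rule are arbitrary illustrative choices.

import random

class SimpleQuantileSketch:
    # Minimal compactor-based quantile sketch (illustrative only).
    # levels[i] holds items that each represent 2**i stream elements.
    def __init__(self, buffer_size=128):
        self.buffer_size = buffer_size
        self.levels = [[]]

    def update(self, x):
        self.levels[0].append(x)
        level = 0
        while len(self.levels[level]) >= self.buffer_size:
            buf = sorted(self.levels[level])
            offset = random.randint(0, 1)
            promoted = buf[offset::2]      # keep a random half ...
            self.levels[level] = []        # ... each now worth double weight
            if level + 1 == len(self.levels):
                self.levels.append([])
            self.levels[level + 1].extend(promoted)
            level += 1

    def quantile(self, q):
        # Weighted rank query over all surviving items.
        weighted = [(x, 2 ** i) for i, buf in enumerate(self.levels) for x in buf]
        if not weighted:
            return None
        weighted.sort()
        target = q * sum(w for _, w in weighted)
        running = 0
        for x, w in weighted:
            running += w
            if running >= target:
                return x
        return weighted[-1][0]

For example, feeding 10^5 uniform random numbers and querying quantile(0.5) returns a value close to 0.5 while storing only a few buffers of samples.
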
Proceedings Article

Matrix Norms in Data Streams: Faster, Multi-Pass and Row-Order

TL;DR: Several aspects of estimating matrix norms in a stream that have not previously been considered are studied, and a near-complete characterization is obtained of the memory required by row-order algorithms for estimating Schatten norms of sparse matrices.
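
For background, the Schatten p-norm of a matrix is the l_p norm of its singular values. The numpy snippet below is a plain worked illustration of that definition and its familiar special cases; it is not taken from the paper, which concerns the streaming memory required to estimate these norms.

import numpy as np

def schatten_norm(A, p):
    # Schatten p-norm: l_p norm of the singular values of A.
    # p=1 is the nuclear norm, p=2 the Frobenius norm, p=inf the spectral norm.
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return s.max()
    return (s ** p).sum() ** (1.0 / p)

A = np.array([[3.0, 0.0], [0.0, 4.0]])
print(schatten_norm(A, 1))       # 7.0 (nuclear)
print(schatten_norm(A, 2))       # 5.0 (Frobenius)
print(schatten_norm(A, np.inf))  # 4.0 (spectral)
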
Book Chapter

How to Catch L2-Heavy-Hitters on Sliding Windows

TL;DR: This work focuses on the problem of finding L2-heavy elements in streaming data, which has remained completely open despite multiple papers and considerable success in finding L1-heavy elements.
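
As background, L2-heavy hitters in insertion-only streams are classically found with a CountSketch, whose estimation error scales with the L2 norm of the frequency vector. The Python toy below is that standard insertion-only structure, not the sliding-window algorithm contributed by this paper; the hash construction is a simplified placeholder.

import random

class CountSketch:
    # Standard CountSketch frequency estimator (illustrative toy).
    def __init__(self, rows=5, cols=256, seed=0):
        rng = random.Random(seed)
        self.rows, self.cols = rows, cols
        self.seeds = [(rng.randrange(1 << 30), rng.randrange(1 << 30))
                      for _ in range(rows)]
        self.table = [[0] * cols for _ in range(rows)]

    def _bucket_sign(self, r, x):
        a, b = self.seeds[r]
        bucket = hash((a, x)) % self.cols
        sign = 1 if hash((b, x)) & 1 else -1
        return bucket, sign

    def update(self, x, count=1):
        for r in range(self.rows):
            j, s = self._bucket_sign(r, x)
            self.table[r][j] += s * count

    def estimate(self, x):
        # Median of per-row estimates; error scales with ||f||_2 / sqrt(cols).
        ests = []
        for r in range(self.rows):
            j, s = self._bucket_sign(r, x)
            ests.append(s * self.table[r][j])
        ests.sort()
        return ests[len(ests) // 2]

Items whose estimated frequency exceeds a threshold proportional to the stream's L2 norm are then reported as heavy hitters.
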
Proceedings Article

The Power of Uniform Sampling for Coresets

TL;DR: The meta-theorem reduces the task of coreset construction to one on a bounded number of ring instances with a much-relaxed additive error, which makes it possible to construct coresets using uniform sampling, in contrast to the widely used importance sampling, and consequently to handle constrained objectives easily.
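
To make the contrast concrete, the Python snippet below shows the plain uniform-sampling primitive the meta-theorem enables: draw m points uniformly and reweight each by n/m. The reduction to ring instances that makes this simple sampling yield a provable coreset is the paper's contribution and is not reproduced here; the k-means cost function is only an example objective.

import random

def uniform_sample_coreset(points, m, seed=0):
    # Uniform sampling with n/m reweighting: an unbiased estimator of any
    # additive clustering cost (no importance/sensitivity scores needed).
    rng = random.Random(seed)
    n = len(points)
    weight = n / m
    return [(p, weight) for p in rng.sample(points, m)]

def kmeans_cost(weighted_points, centers):
    # Weighted sum of squared distances to the nearest center.
    total = 0.0
    for p, w in weighted_points:
        total += w * min(sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                         for c in centers)
    return total
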
Posted Content

Data-Independent Structured Pruning of Neural Networks via Coresets

TL;DR: Based on the coreset framework, this work proposes the first efficient structured pruning algorithm with a provable tradeoff between its compression rate and the approximation error for any future test sample, and it provably guarantees the accuracy of the function for any input x ∈ ℝ^d.
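
As a loose illustration of what structured pruning of one hidden layer looks like, the numpy sketch below samples hidden neurons by a data-independent weight-norm score and reweights the survivors. The scoring rule and sampling scheme here are generic placeholder choices and carry none of the paper's coreset-based guarantees.

import numpy as np

def prune_hidden_neurons(W1, W2, keep, seed=0):
    # Layer computes W2 @ act(W1 @ x); W1 is (hidden, in), W2 is (out, hidden).
    # Sample `keep` hidden neurons with probability proportional to a
    # data-independent score, then importance-reweight the surviving outgoing
    # columns so their sum is an unbiased estimate of the original layer output.
    rng = np.random.default_rng(seed)
    scores = np.linalg.norm(W1, axis=1) * np.linalg.norm(W2, axis=0)
    probs = scores / scores.sum()
    idx = rng.choice(W1.shape[0], size=keep, replace=True, p=probs)
    scale = 1.0 / (keep * probs[idx])
    return W1[idx, :], W2[:, idx] * scale
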