Vladimir Braverman
Researcher at Johns Hopkins University
Publications - 185
Citations - 3374
Vladimir Braverman is an academic researcher at Johns Hopkins University who has contributed to research on topics including computer science and coresets. He has an h-index of 25 and has co-authored 158 publications receiving 2,475 citations. His previous affiliations include the University of California, Los Angeles and Google.
Papers
Proceedings ArticleDOI
I Know What You Did Last Summer: Network Monitoring using Interval Queries
TL;DR: This work presents the first integrated solution that enables multiple measurement tasks within the same data structure, supports specifying the time frame of interest as part of its queries, and is sketch-based and therefore space-efficient.
Proceedings Article
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
TL;DR: The theory explains several folklore practices used for SGD hyperparameter tuning, such as linearly scaling the initial learning rate with the batch size, and running SGD with a high learning rate even after the loss stops decreasing.
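The linear scaling heuristic mentioned above can be sketched in a few lines. This is an illustrative helper, not code from the paper; the function name and the base values are our own assumptions.

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Linear scaling heuristic: when the batch size grows by a factor k,
    scale the initial learning rate by k as well."""
    return base_lr * batch / base_batch

# Example: a recipe tuned at batch size 256 with lr 0.1, moved to batch 1024,
# would start from a 4x larger learning rate.
print(scaled_lr(0.1, 256, 1024))  # scales 0.1 by 1024/256 = 4x
```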
Journal ArticleDOI
Longitudinal functional and imaging outcome measures in FKRP limb-girdle muscular dystrophy.
Doris G. Leung, Alex E. Bocchieri, Shivani Ahlawat, Michael A. Jacobs, Vishwa S. Parekh, Vladimir Braverman, Katherine Summerton, Jennifer Mansour, Genila Bibat, Carl Morris, Shannon Marraffino, Kathryn R. Wagner +14 more
TL;DR: In this paper, the authors used deep learning algorithms to analyze MRI scans and quantify muscle, fat, and intramuscular fat infiltration in the thighs of adults with limb-girdle muscular dystrophy with cardiomyopathy.
Proceedings ArticleDOI
Pretrained Models for Multilingual Federated Learning
TL;DR: The results show that using pretrained models reduces the negative effects of FL, helping models perform close to or better than centralized (no-privacy) learning, even under non-IID partitioning.
Posted Content
FetchSGD: Communication-Efficient Federated Learning with Sketching
Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora +7 more
TL;DR: In this paper, the authors propose FetchSGD, which compresses model updates using a Count Sketch and takes advantage of the mergeability of sketches to combine model updates from many workers, overcoming the communication bottleneck and convergence issues of federated learning.
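The mergeability the summary above relies on comes from the linearity of the Count Sketch: sketching two updates and summing the sketches gives the sketch of the summed update. The toy class below is an illustrative sketch of that property only; the class name, parameters, and sparse-update format are our own assumptions, not FetchSGD's actual API.

```python
import numpy as np

class CountSketch:
    """Minimal Count Sketch over a fixed coordinate universe (illustrative)."""

    def __init__(self, rows=5, cols=64, dim=100, seed=0):
        rng = np.random.default_rng(seed)
        self.rows = rows
        # Hash buckets and random +/-1 signs per (row, coordinate); workers
        # must share the same seed so their sketches are mergeable.
        self.bucket = rng.integers(0, cols, size=(rows, dim))
        self.sign = rng.choice([-1.0, 1.0], size=(rows, dim))
        self.table = np.zeros((rows, cols))

    def add(self, update):
        """Insert a sparse update given as {coordinate: value}."""
        for i, v in update.items():
            for r in range(self.rows):
                self.table[r, self.bucket[r, i]] += self.sign[r, i] * v

    def merge(self, other):
        """Linearity: sketch(u) + sketch(v) equals sketch(u + v)."""
        self.table += other.table

    def estimate(self, i):
        """Median of per-row unbiased estimates of coordinate i."""
        return float(np.median([self.sign[r, i] * self.table[r, self.bucket[r, i]]
                                for r in range(self.rows)]))

# Two workers sketch their local model updates...
w1, w2 = CountSketch(), CountSketch()
w1.add({3: 10.0, 7: -2.0})
w2.add({3: 5.0})
# ...and the server merges the fixed-size sketches instead of full updates.
w1.merge(w2)
print(w1.estimate(3))  # close to 15.0, the true combined value
```

The key design point is that the server's memory and communication cost depend on the sketch size (rows x cols), not on the model dimension or the number of workers.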