Ludwig Schmidt
Researcher at University of California, Berkeley
Publications - 113
Citations - 16,734
Ludwig Schmidt is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and robustness (computer science). He has an h-index of 33 and has co-authored 83 publications receiving 10,934 citations. Previous affiliations of Ludwig Schmidt include the University of Washington and the Massachusetts Institute of Technology.
Papers
Posted Content
Towards Deep Learning Models Resistant to Adversarial Attacks
TL;DR: This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
Proceedings Article
Towards Deep Learning Models Resistant to Adversarial Attacks
TL;DR: This article studies the adversarial robustness of neural networks through the lens of robust optimization and identifies methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
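The "first-order adversary" in the robust-optimization view above is typically instantiated as projected gradient descent (PGD): repeated signed gradient ascent on the loss, projected back into an L-infinity ball around the clean input. The following is a minimal NumPy sketch on a toy analytic loss, not the authors' implementation; the loss function, step size, and radius are illustrative choices.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, alpha=0.05, steps=40):
    """Maximize the loss with signed gradient ascent, projecting each
    iterate back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # first-order information only
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy loss L(x) = ||x - target||^2 with analytic gradient 2*(x - target);
# a real attack would use the gradient of a network's training loss.
target = np.array([1.0, -1.0])
grad_fn = lambda x: 2.0 * (x - target)
x = np.zeros(2)
x_adv = pgd_attack(x, grad_fn)
```

Adversarial training then minimizes the loss at `x_adv` instead of `x`, which is the min-max formulation the paper studies.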
Proceedings Article
Unlabeled Data Improves Adversarial Robustness
TL;DR: The paper proves that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.
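Self-training, the procedure named in the TL;DR, fits a model on the labeled data, pseudo-labels the unlabeled data with that model, and retrains on the union. A minimal NumPy sketch with a toy nearest-centroid classifier standing in for the real model (the classifier and data here are illustrative, not from the paper):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Toy base classifier: one centroid per class.
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def self_train(X_lab, y_lab, X_unlab):
    model = nearest_centroid_fit(X_lab, y_lab)
    pseudo = nearest_centroid_predict(model, X_unlab)  # pseudo-label unlabeled data
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    return nearest_centroid_fit(X_all, y_all)          # retrain on the union

# Illustrative data: one labeled point per cluster, four unlabeled points.
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.2, 0.1], [3.9, 4.2], [0.1, -0.1], [4.1, 3.8]])
model = self_train(X_lab, y_lab, X_unlab)
preds = nearest_centroid_predict(model, np.array([[0.5, 0.5], [3.5, 3.5]]))
```

In the paper's setting the retraining step is adversarial training on the pseudo-labeled union, which is where the robust-accuracy gain comes from.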
Proceedings Article
Adversarially Robust Generalization Requires More Data
TL;DR: In this paper, the authors study adversarially robust learning from the viewpoint of generalization and show that the sample complexity of robust learning can be significantly larger than that of "standard" learning.
Proceedings ArticleDOI
LAION-5B: An open large-scale dataset for training next generation image-text models
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev +15 more
TL;DR: This work presents LAION-5B, a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32 billion are English-language. It shows successful replication and fine-tuning of foundational models such as CLIP, GLIDE, and Stable Diffusion using the dataset, and discusses further experiments enabled by an openly available dataset of this scale.
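The "CLIP-filtered" step means each candidate image-text pair is kept only if the cosine similarity of its CLIP image and text embeddings clears a threshold. A minimal NumPy sketch of that filter, assuming precomputed embeddings; the threshold value and the tiny example embeddings are illustrative, not the dataset's actual pipeline:

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two batches of embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(-1)

def filter_pairs(img_emb, txt_emb, threshold=0.28):
    """Return indices of image-text pairs whose embedding similarity
    clears the threshold (threshold value here is illustrative)."""
    sims = cosine_sim(img_emb, txt_emb)
    return np.nonzero(sims >= threshold)[0]

# Illustrative 2-D "embeddings": pair 0 is aligned, pair 1 is not.
img_emb = np.array([[1.0, 0.0], [1.0, 0.0]])
txt_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
kept = filter_pairs(img_emb, txt_emb)
```

At LAION scale the same thresholding is applied to billions of crawled pairs, which is what makes the resulting dataset usable for training models like CLIP and Stable Diffusion.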