Jeremy M. Cohen
Researcher at Carnegie Mellon University
Publications - 5
Citations - 1505
Jeremy M. Cohen is an academic researcher from Carnegie Mellon University. The author has contributed to research on topics including smoothing and robustness (computer science). The author has an h-index of 5, having co-authored 5 publications receiving 950 citations.
Papers
Posted Content
Certified Adversarial Robustness via Randomized Smoothing
TL;DR: Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
Proceedings Article
Certified Adversarial Robustness via Randomized Smoothing
TL;DR: In this paper, randomized smoothing is used to obtain an ImageNet classifier with a certified top-1 accuracy of 49% under adversarial perturbations with $\ell_2$ norm less than 0.5.
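The prediction rule behind randomized smoothing is simple to sketch: the smoothed classifier returns the class the base classifier outputs most often when Gaussian noise is added to the input. The following is a minimal illustrative sketch of that majority-vote step, not the paper's implementation; `smoothed_predict` and the toy base classifier are made up for demonstration (the paper additionally derives a certified $\ell_2$ radius from the vote statistics).

```python
import random
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=2000, seed=0):
    """Randomized-smoothing prediction: majority vote of the base classifier
    over inputs perturbed by isotropic Gaussian noise N(0, sigma^2 I),
    i.e. g(x) = argmax_c P(f(x + eps) = c)."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[base_classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Hypothetical toy base classifier: label 1 iff the first coordinate is positive.
base = lambda z: int(z[0] > 0)
print(smoothed_predict(base, [1.0, -2.0], sigma=0.5))
```

With the first coordinate at 1.0 and noise of standard deviation 0.5, the base classifier votes for class 1 roughly 98% of the time, so the smoothed prediction is 1; larger `sigma` trades clean accuracy for robustness to larger perturbations.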
Posted Content
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
TL;DR: This paper shows that perceptually-aligned gradients also occur under randomized smoothing, an alternative means of constructing adversarially-robust classifiers, and supports the hypothesis that perceptually-aligned gradients may be a general property of robust classifiers.
Proceedings Article
Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability
TL;DR: The authors empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime called the Edge of Stability, where the leading eigenvalue of the training loss Hessian hovers just above the value 2/(step size).
Posted Content
Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability
TL;DR: The authors empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime called the Edge of Stability, where the maximum eigenvalue of the training loss Hessian hovers just above the numerical value $2/\text{(step size)}$, and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales.
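The $2/\text{(step size)}$ threshold in the summaries above comes from classical stability analysis: on a quadratic loss with curvature $a$, the gradient descent iterate satisfies $w \leftarrow (1 - \eta a)\,w$, which contracts iff $a < 2/\eta$. A minimal sketch of that threshold (an illustrative quadratic example, not the paper's neural-network experiments):

```python
# Gradient descent on the quadratic loss L(w) = 0.5 * a * w**2,
# whose Hessian is the constant curvature a. Each step applies
# w <- w - lr * a * w = (1 - lr * a) * w, stable iff |1 - lr*a| < 1,
# i.e. iff the curvature a is below 2 / lr.

def run_gd(curvature, lr, w0=1.0, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * curvature * w
    return abs(w)

lr = 0.1  # stability threshold: 2 / lr = 20
below = run_gd(curvature=19.0, lr=lr)  # just below the threshold: converges
above = run_gd(curvature=21.0, lr=lr)  # just above the threshold: diverges
print(below, above)
```

Running this shows `below` shrinking toward zero while `above` blows up; the Edge of Stability finding is that neural network training drives the Hessian's top eigenvalue up to, and slightly past, this threshold rather than staying safely below it.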