Michael I. Jordan

Researcher at University of California, Berkeley

Publications - 1110
Citations - 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and inference, has an h-index of 176, and has co-authored 1016 publications receiving 216204 citations. His previous affiliations include Stanford University and Princeton University.

Papers
Proceedings Article

On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo

TL;DR: In this article, the authors provide convergence guarantees in Wasserstein distance for a variety of variance-reduction methods for stochastic gradient Monte Carlo: SAGA Langevin dynamics, SVRG Langevin dynamics, and control-variate underdamped Langevin dynamics.
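A minimal sketch of the variance-reduction idea behind SVRG Langevin dynamics: an occasional full-gradient "anchor" corrects each cheap stochastic gradient, and Gaussian noise turns the update into a sampler. The target (a 1D Gaussian built from quadratic potentials) and all hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
data = rng.normal(loc=2.0, scale=1.0, size=n)

def grad_fi(x, i):
    # Gradient of f_i(x) = (x - data[i])^2 / 2; the full potential
    # sum_i f_i(x) has its minimum at the data mean.
    return x - data[i]

def full_grad(x):
    return np.sum(x - data)

step = 1e-3
epoch_len = 10
x = 0.0
anchor, anchor_grad = x, full_grad(x)
samples = []
for t in range(5000):
    if t % epoch_len == 0:          # refresh the full-gradient anchor
        anchor, anchor_grad = x, full_grad(x)
    i = rng.integers(n)
    # SVRG estimate: unbiased for the full gradient, with reduced variance
    g = n * (grad_fi(x, i) - grad_fi(anchor, i)) + anchor_grad
    x = x - step * g + np.sqrt(2 * step) * rng.normal()
    samples.append(x)

mean_est = np.mean(samples[1000:])   # should be near the data mean
```

The same loop with `g = n * grad_fi(x, i)` gives plain stochastic gradient Langevin dynamics; the anchor term is what shrinks the gradient noise as `x` approaches the anchor point.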
Posted Content

A Swiss Army Infinitesimal Jackknife

TL;DR: A linear approximation to the dependence of the fitting procedure on the data weights yields results that can be an order of magnitude faster than repeated re-fitting, supporting the application of the infinitesimal jackknife to a wide variety of practical problems in machine learning.
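A hedged sketch of the infinitesimal jackknife idea: linearize the estimator in per-datapoint weights and read off approximate leave-one-out values from the derivative, instead of re-fitting n times. The estimator here (the sample mean) is my choice for illustration, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
n = len(x)
theta_hat = x.mean()

# Influence of point i on the weighted mean sum(w*x)/sum(w):
# d theta / d w_i evaluated at uniform weights w = 1.
influence = (x - theta_hat) / n

# IJ approximation to leave-one-out: set w_i from 1 to 0 in the
# linearized estimator, i.e. subtract the influence of point i.
loo_ij = theta_hat - influence

# Exact leave-one-out estimates, by actually re-fitting n times.
loo_exact = np.array([np.delete(x, i).mean() for i in range(n)])

max_err = np.max(np.abs(loo_ij - loo_exact))   # small: O(1/n^2) here
```

For the mean the exact and linearized leave-one-out values differ only by a factor of n versus n-1, so the approximation error shrinks like 1/n²; the practical payoff is that `influence` is computed once, with no re-fitting loop.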
Journal ArticleDOI

ML-LOO: Detecting Adversarial Examples with Feature Attribution

TL;DR: This work introduces a new framework for detecting adversarial examples by thresholding a scale estimate of feature attribution scores, and extends it to multi-layer feature attributions in order to tackle attacks with mixed confidence levels.
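A toy sketch of the leave-one-out attribution step and the thresholding rule. The linear model, the choice of standard deviation as the scale estimate, and the threshold value are all illustrative assumptions; the paper works with trained neural networks.

```python
import numpy as np

# A stand-in model: for f(x) = w @ x, masking feature i changes the
# output by exactly w[i] * x[i], which makes the attributions easy to check.
w = np.array([0.5, -1.2, 2.0, 0.3])

def model(x):
    return float(w @ x)

def loo_attributions(x):
    # Attribution of feature i = output change when feature i is masked.
    base = model(x)
    scores = []
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = 0.0
        scores.append(base - model(masked))
    return np.array(scores)

def is_adversarial(x, threshold=1.5):
    # Flag inputs whose attribution scores are unusually dispersed;
    # the threshold would be calibrated on clean data in practice.
    return float(np.std(loo_attributions(x))) > threshold

x = np.array([1.0, 0.5, -0.5, 2.0])
scores = loo_attributions(x)
```

The intuition being sketched: a clean input's prediction typically rests on a few features, while an adversarial perturbation spreads small contributions across many features, inflating the dispersion of the attribution scores.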
Journal Article

Spectral methods meet EM: a provably optimal algorithm for crowdsourcing

TL;DR: In this article, a two-stage efficient algorithm for multi-class crowd labeling problems is proposed, where the first stage uses the spectral method to obtain an initial estimate of parameters, and the second stage refines the estimation by optimizing the objective function of the Dawid-Skene estimator via the EM algorithm.
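A sketch of the second (EM) stage under the Dawid-Skene model, with the paper's spectral initialization replaced here by simple majority voting for brevity. The simulated workers, class count, and accuracy level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_workers, k = 200, 5, 2
truth = rng.integers(k, size=n_items)

# Each worker reports the true label with probability 0.8, else a wrong one.
acc = 0.8
obs = np.where(rng.random((n_items, n_workers)) < acc,
               truth[:, None],
               (truth[:, None] + rng.integers(1, k, (n_items, n_workers))) % k)

# Stage-1 stand-in: one-hot posterior from majority vote
# (the paper uses a spectral method here instead).
post = np.zeros((n_items, k))
for j in range(n_items):
    post[j, np.bincount(obs[j], minlength=k).argmax()] = 1.0

for _ in range(20):
    # M-step: class prior and per-worker confusion matrices
    # pi[w, true_label, reported_label].
    prior = post.mean(axis=0)
    pi = np.full((n_workers, k, k), 1e-6)
    for wkr in range(n_workers):
        for j in range(n_items):
            pi[wkr, :, obs[j, wkr]] += post[j]
        pi[wkr] /= pi[wkr].sum(axis=1, keepdims=True)
    # E-step: posterior over each item's true label.
    logp = np.log(prior)[None, :].repeat(n_items, axis=0)
    for wkr in range(n_workers):
        logp += np.log(pi[wkr, :, obs[:, wkr]])
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)

pred = post.argmax(axis=1)
accuracy = (pred == truth).mean()
```

The point of the two-stage design is that EM refines whatever initialization it is given but only enjoys optimality guarantees when started near the truth, which is what the spectral first stage provides.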
Posted Content

Optimality guarantees for distributed statistical estimation

TL;DR: This work defines and studies refinements of the classical minimax risk that apply to distributed settings, comparing distributed estimators to those with access to the entire data, and establishes lower bounds on these quantities.
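A toy sketch of the distributed setting being analyzed: each machine holds a shard of the data and communicates only a local summary. For mean estimation with equal shard sizes, averaging the local averages recovers the centralized estimator exactly; the paper's lower bounds concern when distributed protocols can or cannot match centralized risk more generally. The setup below is illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 1000, 10
data = rng.normal(loc=0.5, size=n)

# Split the data across m machines; each sends one scalar to the center.
shards = data.reshape(m, n // m)
local_means = shards.mean(axis=1)

distributed_est = local_means.mean()   # average of the m local averages
centralized_est = data.mean()          # estimator with access to all data
```

Here the two estimators coincide, so communicating m scalars instead of n samples costs nothing; the interesting regimes in the paper are those where any low-communication protocol provably incurs extra risk.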