
Michael I. Jordan

Researcher at University of California, Berkeley

Publications: 1110
Citations: 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research in topics including Computer science & Inference. He has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University & Princeton University.

Papers
Posted Content

DAGGER: A sequential algorithm for FDR control on DAGs

TL;DR: The DAGGER algorithm, shorthand for Greedily Evolving Rejections on DAGs, provably controls the false discovery rate under independence, positive dependence, or arbitrary dependence of the $p$-values.
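The core sequential constraint can be sketched in a few lines: a hypothesis on the DAG may be rejected only after all of its parents have been rejected, and testing proceeds depth by depth. This is a simplified illustration that uses a plain Benjamini-Hochberg-style step-up at each depth rather than DAGGER's exact reshaped thresholds; the toy DAG, the p-values, and `alpha` are all illustrative assumptions.

```python
# Simplified sketch of sequential rejection on a DAG, inspired by DAGGER:
# a node is eligible for testing only once all of its parents are rejected.
# Uses a plain BH-style step-up per depth, NOT DAGGER's exact thresholds.

def dag_sequential_reject(parents, pvals, alpha=0.05):
    """parents: dict node -> list of parent nodes ([] for roots)."""
    rejected = set()
    depth = {}                          # depth = longest path from a root

    def d(n):
        if n not in depth:
            depth[n] = 0 if not parents[n] else 1 + max(d(p) for p in parents[n])
        return depth[n]

    for n in parents:
        d(n)
    for level in range(max(depth.values()) + 1):
        # eligible: nodes at this depth whose parents were all rejected
        eligible = [n for n in parents
                    if depth[n] == level and all(p in rejected for p in parents[n])]
        if not eligible:
            continue
        # BH-style step-up within the eligible set at this depth
        ordered = sorted(eligible, key=lambda n: pvals[n])
        m = len(ordered)
        cutoff = 0
        for i, n in enumerate(ordered, start=1):
            if pvals[n] <= alpha * i / m:
                cutoff = i
        rejected.update(ordered[:cutoff])
    return rejected

# Toy DAG: A -> B, A -> C, (B, C) -> D
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
pvals = {"A": 0.001, "B": 0.01, "C": 0.20, "D": 0.004}
print(sorted(dag_sequential_reject(parents, pvals)))
```

Note that D has a small p-value but is never tested, because its parent C was not rejected; this logical consistency of the rejection set is the point of testing on the DAG sequentially.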
Proceedings ArticleDOI

On optimal quantization rules for sequential decision problems

TL;DR: A negative answer is provided to the question of whether optimal local decision functions for the Bayesian formulation of sequential decentralized detection can be found within the class of stationary rules, by exploiting an asymptotic approximation to the optimal cost of stationary quantization rules and the asymmetry of the Kullback-Leibler divergences.
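The "asymmetry of the Kullback-Leibler divergences" invoked here is the fact that KL(P‖Q) ≠ KL(Q‖P) in general. A minimal numerical illustration with two Bernoulli distributions (the parameters 0.9 and 0.5 are arbitrary choices for this example, not values from the paper):

```python
from math import log

# The Kullback-Leibler divergence is not symmetric: KL(P||Q) != KL(Q||P).
# Illustrated with two Bernoulli distributions; parameters are arbitrary.

def kl_bernoulli(p, q):
    """KL(Ber(p) || Ber(q)) in nats."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

forward = kl_bernoulli(0.9, 0.5)   # KL(P||Q)
reverse = kl_bernoulli(0.5, 0.9)   # KL(Q||P)
print(forward, reverse)
```

The two directions differ markedly, which is why arguments that compare error exponents under the two hypotheses must track both divergences separately.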
Proceedings Article

Random Conic Pursuit for Semidefinite Programming

TL;DR: A novel algorithm is presented that solves semidefinite programs (SDPs) via repeated optimization over randomly selected two-dimensional subcones of the PSD cone; the algorithm is simple, easily implemented, applicable to very general SDPs, scalable, and theoretically interesting.
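The idea of the two-dimensional subcone step can be sketched concretely: at each iteration, restrict the objective to nonnegative combinations of the current iterate X and a random rank-one matrix vvᵀ, and solve that tiny two-variable problem. The quadratic objective f(X) = ⟨C, X⟩ + ‖X‖²_F below is an illustrative stand-in, not one of the SDPs treated in the paper, and the inner coordinate descent is one simple way to solve the 2-D subproblem.

```python
import numpy as np

# Sketch of random conic pursuit: repeatedly minimize a convex objective
# over the 2-D subcone {a*X + b*v v^T : a, b >= 0} spanned by the current
# iterate and a random rank-one direction. Objective is illustrative.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                      # arbitrary symmetric cost matrix

def f(X):
    return np.sum(C * X) + np.sum(X * X)

X = np.eye(n) / n
for _ in range(200):
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    # f(a X + b v v^T) = c1*a + c2*b + p*a^2 + 2*q*a*b + r*b^2
    c1, c2 = np.sum(C * X), v @ C @ v
    p, q, r = np.sum(X * X), v @ X @ v, 1.0
    a, b = 1.0, 0.0                    # start at the current iterate
    for _ in range(50):                # coordinate descent on the 2-D subcone
        a = max(0.0, -(c1 + 2 * q * b) / (2 * max(p, 1e-12)))
        b = max(0.0, -(c2 + 2 * q * a) / (2 * r))
    X = a * X + b * np.outer(v, v)

print(f(X))
```

Because each inner solve starts from the current iterate (a, b) = (1, 0), the objective is monotonically nonincreasing, and every iterate stays PSD since it is a nonnegative combination of PSD matrices.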
Posted Content

Tree-Structured Stick Breaking Processes for Hierarchical Data

TL;DR: This paper uses nested stick-breaking processes to allow for trees of unbounded width and depth, where data can live at any node and remain infinitely exchangeable, and applies the method to hierarchical clustering of images and topic modeling of text data.
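The generative step for assigning a datum to a tree node can be sketched as follows: at each node, stop with a Beta-distributed stick weight, otherwise descend into a child chosen by stick-breaking over an unbounded set of children. This is a simplified sketch with a constant stopping parameter `alpha` (the model also allows depth-dependent parameters); `alpha` and `gamma` values are illustrative.

```python
import random

# Sketch of drawing one datum's node from a tree-structured stick-breaking
# process: stop at the current node with probability nu ~ Beta(1, alpha),
# else descend into a child chosen by Beta(1, gamma) stick-breaking over
# infinitely many children. Sticks are memoized so draws share one tree.

random.seed(0)
alpha, gamma = 1.0, 0.5
stop = {}     # node path (tuple of child indices) -> stop probability nu
sticks = {}   # (node path, child index) -> child stick psi

def draw_node():
    path = ()
    while True:
        nu = stop.setdefault(path, random.betavariate(1.0, alpha))
        if random.random() < nu:
            return path            # datum lives here, possibly an internal node
        child = 0                  # stick-breaking over unbounded children
        while True:
            psi = sticks.setdefault((path, child), random.betavariate(1.0, gamma))
            if random.random() < psi:
                break
            child += 1
        path = path + (child,)

draws = [draw_node() for _ in range(5)]
print(draws)
```

Each draw returns a path such as `(0, 2)`, i.e. the third child of the first child of the root; crucially, the empty path (the root) and other internal nodes are valid destinations, which is what lets data live at any node.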
Proceedings Article

Projection Robust Wasserstein Distance and Riemannian Optimization

TL;DR: In this paper, it was shown that the Wasserstein projection pursuit (WPP) distance can be efficiently computed in practice using Riemannian optimization, yielding better behavior in relevant cases than its convex relaxation.
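A toy version of the projection pursuit idea, restricted to one-dimensional projections, can be sketched directly: search for the unit vector u maximizing the 1-D 2-Wasserstein distance between the projected samples. The unit sphere is the k = 1 case of the Stiefel manifold, and gradient ascent followed by renormalization is a simple retraction; the data, step size, and iteration count below are illustrative assumptions, and the paper treats general k-dimensional projections.

```python
import numpy as np

# Toy sketch of projection robust Wasserstein with a 1-D projection:
# maximize over unit vectors u the 1-D W2 distance between projected
# samples (computable by sorting). Gradient ascent + renormalization is
# a simple retraction onto the sphere (the k=1 Stiefel manifold).

rng = np.random.default_rng(0)
n, d = 64, 5
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d)) + np.array([3.0, 0, 0, 0, 0])  # shift in dim 0

def w2_sq_1d(a, b):
    a, b = np.sort(a), np.sort(b)
    return np.mean((a - b) ** 2)

u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for _ in range(300):
    px, py = X @ u, Y @ u
    ix, iy = np.argsort(px), np.argsort(py)
    diff = px[ix] - py[iy]
    grad = 2.0 * (X[ix] - Y[iy]).T @ diff / n   # gradient, matching held fixed
    u = u + 0.01 * grad
    u /= np.linalg.norm(u)                      # retract back onto the sphere

print(w2_sq_1d(X @ u, Y @ u))
```

Since the two samples differ only by a shift along the first coordinate, the learned projection aligns with that direction, while a projection onto any other coordinate would see two nearly identical distributions.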