SciSpace - Formally Typeset

Michael I. Jordan

Researcher at University of California, Berkeley

Publications - 1110
Citations - 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. The author has contributed to research in topics including computer science and inference. The author has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.

Papers
Proceedings Article

Dimensionality Reduction for Spectral Clustering

TL;DR: This work introduces an augmented form of spectral clustering in which an explicit projection operator is incorporated into the relaxed optimization functional, and optimizes this functional jointly over both the projection and the spectral embedding.
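The augmented, projection-aware formulation is specific to the paper, but the spectral relaxation it builds on can be illustrated with a plain two-way spectral cut (a generic sketch, not the paper's algorithm): build a Gaussian affinity graph, take the Fiedler vector of its Laplacian, and threshold it.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split points into two clusters via the Fiedler vector of the graph Laplacian."""
    # Gaussian affinity matrix from pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                        # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
    return (fiedler > np.median(fiedler)).astype(int)

# Two well-separated blobs should land in different clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
labels = spectral_bipartition(X)
```

The median threshold replaces the k-means step of standard multi-way spectral clustering; it suffices for the two-cluster case.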
Posted Content

On the Complexity of Approximating Multimarginal Optimal Transport.

TL;DR: This work gives the first near-linear-time complexity bound for approximating the MOT problem, matching the best known complexity bound for the Sinkhorn algorithm in the classical OT setting when m = 2.
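The m = 2 baseline referenced here is the classical Sinkhorn algorithm for entropy-regularized optimal transport, which alternates row and column scalings of a Gibbs kernel. A minimal sketch (the standard two-marginal algorithm, not the paper's multimarginal method):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropy-regularized OT between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                 # rescale to match column marginals
        u = a / (K @ v)                   # rescale to match row marginals
    return u[:, None] * K * v[None, :]    # transport plan

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # cheap to stay, costly to move
P = sinkhorn(a, b, C)
```

The returned plan's row and column sums reproduce the input marginals, and mass concentrates on the low-cost diagonal.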
Proceedings ArticleDOI

Bayesian multi-population haplotype inference via a hierarchical Dirichlet process mixture

TL;DR: This paper captures cross-population structure using a nonparametric Bayesian prior known as the hierarchical Dirichlet process (HDP), conjoining this prior with a recently developed Bayesian methodology for haplotype phasing known as DP-Haplotyper. It develops an efficient sampling algorithm based on a two-level nested Pólya urn scheme.
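The two-level nested urn is specific to the HDP construction, but its single-level building block, the Chinese restaurant process (Pólya urn representation of a Dirichlet process), can be sketched as follows (illustrative only, not the paper's sampler):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a random partition of n items from a Chinese restaurant process."""
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    labels = []
    for i in range(n):
        # Customer i joins table k with probability tables[k] / (i + alpha),
        # or starts a new table with probability alpha / (i + alpha).
        r = rng.random() * (i + alpha)
        cum = 0.0
        for k, cnt in enumerate(tables):
            cum += cnt
            if r < cum:
                tables[k] += 1
                labels.append(k)
                break
        else:
            labels.append(len(tables))
            tables.append(1)
    return labels

labels = crp_partition(50, alpha=1.0)
```

In the nested (two-level) scheme, the dishes served at each restaurant's tables are themselves drawn from a shared top-level urn, which is what ties the populations together.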
Proceedings Article

The phylogenetic Indian Buffet process: a non-exchangeable nonparametric prior for latent features

TL;DR: This paper presents a non-exchangeable prior for a class of nonparametric latent feature models that is nearly as efficient computationally as its exchangeable counterpart. The model applies to the general setting in which dependencies between objects can be expressed as a tree, with edge lengths indicating the strength of relationships.
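The exchangeable counterpart being generalized is the standard Indian Buffet Process, in which each "customer" samples existing features in proportion to their popularity and adds a Poisson number of new ones. A sketch of that baseline (not the phylogenetic variant):

```python
import math
import random

def ibp_sample(n, alpha, seed=0):
    """Draw a binary feature matrix Z from the exchangeable Indian Buffet Process."""
    rng = random.Random(seed)

    def poisson(lam):                 # Knuth's inversion sampler for small lambda
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    counts, rows = [], []             # counts[k] = customers who took dish k
    for i in range(1, n + 1):
        # take each existing dish k with probability counts[k] / i
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        for _ in range(poisson(alpha / i)):   # Poisson(alpha/i) brand-new dishes
            row.append(1)
            counts.append(1)
        rows.append(row)
    K = len(counts)
    return [r + [0] * (K - len(r)) for r in rows]   # pad rows to full width

Z = ibp_sample(20, alpha=2.0)
```

The phylogenetic IBP replaces this exchangeable sharing rule with one mediated by a tree over the objects, so closely related objects share features more often.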
Posted Content

Sampling Can Be Faster Than Optimization

TL;DR: This work examines a class of nonconvex objective functions that arise in mixture modeling and multistable systems, and finds that the computational complexity of sampling algorithms scales linearly with the model dimension, while that of optimization algorithms scales exponentially.
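A minimal illustration of the kind of sampler involved is the unadjusted Langevin algorithm run on a double-well potential: where gradient descent settles into whichever basin it starts in, the noisy Langevin dynamics visit both modes. This is a generic sketch, not the paper's analysis:

```python
import numpy as np

def ula(grad_U, x0, step=0.01, n=20000, seed=0):
    """Unadjusted Langevin algorithm: x <- x - step*grad_U(x) + sqrt(2*step)*noise."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n)
    for t in range(n):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        out[t] = x
    return out

# Double-well potential U(x) = (x^2 - 1)^2, with modes at x = +1 and x = -1.
grad_U = lambda x: 4 * x * (x ** 2 - 1)
samples = ula(grad_U, x0=1.0)
```

Dropping the noise term turns the same iteration into plain gradient descent, which stays trapped near x = 1 from this initialization; the comparison of those two behaviors is the intuition behind the title.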