Michael I. Jordan

Researcher at University of California, Berkeley

Publications: 1110
Citations: 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. The author has contributed to research on topics including computer science and inference. The author has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.

Papers
Posted Content

On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms

TL;DR: In this article, the complexity bound of the Greenkhorn algorithm, a greedy variant of the Sinkhorn algorithm, is improved to O(n^2 ε^{-2}) by using a primal-dual formulation and an upper bound on the dual solution.
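
For illustration, here is a minimal sketch of the greedy row/column rescaling that characterizes the Greenkhorn algorithm for entropy-regularized optimal transport. The function name, regularization value, stopping tolerance, and the use of the absolute marginal violation as the greedy selection rule are illustrative simplifications, not the paper's reference implementation.

```python
# Minimal Greenkhorn-style sketch: greedily rescale the single row or
# column whose marginal constraint is most violated.
import numpy as np

def greenkhorn(C, r, c, reg=0.1, n_iter=5000, tol=1e-6):
    """Approximate the OT plan between marginals r and c for cost matrix C."""
    K = np.exp(-C / reg)               # Gibbs kernel
    u = np.ones(len(r)) / len(r)       # row scaling variables
    v = np.ones(len(c)) / len(c)       # column scaling variables
    P = u[:, None] * K * v[None, :]    # current transport plan diag(u) K diag(v)
    for _ in range(n_iter):
        row_gap = np.abs(P.sum(axis=1) - r)   # row-marginal violations
        col_gap = np.abs(P.sum(axis=0) - c)   # column-marginal violations
        i, j = row_gap.argmax(), col_gap.argmax()
        if max(row_gap[i], col_gap[j]) < tol:
            break
        # Greedy step: rescale only the worst-violating row or column.
        if row_gap[i] > col_gap[j]:
            u[i] = r[i] / (K[i, :] @ v)
            P[i, :] = u[i] * K[i, :] * v
        else:
            v[j] = c[j] / (K[:, j] @ u)
            P[:, j] = u * K[:, j] * v[j]
    return P

# Toy usage with uniform marginals and a random cost matrix.
rng = np.random.default_rng(0)
P = greenkhorn(rng.random((5, 5)), np.full(5, 0.2), np.full(5, 0.2))
```
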
Journal ArticleDOI

SMaSH: a benchmarking toolkit for human genome variant calling

TL;DR: SMaSH is proposed as a benchmarking methodology for evaluating germline variant-calling algorithms: it generates synthetic datasets, organizes and interprets a wide range of existing benchmarking data for real genomes, and proposes a set of accuracy and computational performance metrics for evaluating variant-calling methods on these data.
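
As a rough illustration of the kind of accuracy metric such a benchmark reports (not the SMaSH toolkit itself), the sketch below computes precision and recall of a call set against a ground-truth variant set keyed by site and allele; the helper name and the toy variants are hypothetical.

```python
# Illustrative variant-calling accuracy: precision and recall against a
# truth set, with variants keyed as (chrom, pos, ref, alt) tuples.
def variant_accuracy(called, truth):
    true_positives = len(called & truth)
    precision = true_positives / len(called) if called else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

called = {("chr1", 1002, "A", "G"), ("chr1", 5048, "T", "C")}
truth = {("chr1", 1002, "A", "G"), ("chr2", 7, "G", "T")}
print(variant_accuracy(called, truth))   # (0.5, 0.5)
```
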
Proceedings ArticleDOI

Real-Time Machine Learning: The Missing Pieces

TL;DR: It is asserted that a new distributed execution framework is needed for such ML applications, and a candidate approach is proposed with a proof-of-concept architecture that achieves a 63x performance improvement over a state-of-the-art execution framework for a representative application.
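
To illustrate the kind of fine-grained, dynamic task API such an execution framework exposes, the sketch below uses the open-source Ray library's task primitives; Ray is not named in the summary above, and the simulate task is a hypothetical stand-in for an RL-style workload.

```python
# Launch many small tasks asynchronously and gather their results.
import ray

ray.init()

@ray.remote
def simulate(step: int) -> float:
    # Stand-in for one rollout / simulation step of an RL workload.
    return step * 0.5

futures = [simulate.remote(i) for i in range(8)]  # futures return immediately
results = ray.get(futures)                        # block until tasks finish
print(sum(results))
```
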
Proceedings ArticleDOI

Nonparametric estimation of the likelihood ratio and divergence functionals

TL;DR: This work develops and analyzes a nonparametric method for estimating the class of f-divergence functionals and the density ratio of two probability distributions, and obtains an M-estimator for divergences based on a convex and differentiable optimization problem that can be solved efficiently.
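
A hedged sketch of the variational idea behind such M-estimators: KL(P||Q) admits the dual representation sup_g E_P[g(X)] - E_Q[exp(g(Y) - 1)], which can be maximized over a kernel expansion by gradient ascent. The kernel width, ridge penalty, learning rate, and number of centers below are illustrative choices, not the paper's estimator.

```python
# Variational KL estimation from samples of P and Q: maximize a concave
# dual objective over a Gaussian-kernel expansion of the witness function g.
import numpy as np

def kl_estimate(x, y, width=0.75, ridge=1e-3, lr=0.05, n_steps=2000, n_centers=50):
    rng = np.random.default_rng(1)
    centers = rng.choice(np.concatenate([x, y]), size=n_centers, replace=False)
    feat = lambda z: np.exp(-(z[:, None] - centers[None, :])**2 / (2 * width**2))
    Fx, Fy = feat(x), feat(y)          # kernel features of P- and Q-samples
    alpha = np.zeros(n_centers)        # coefficients of g in the expansion
    for _ in range(n_steps):
        expgy = np.exp(Fy @ alpha - 1.0)
        # Gradient of E_P[g] - E_Q[exp(g - 1)] - ridge * ||alpha||^2.
        grad = Fx.mean(axis=0) - Fy.T @ expgy / len(y) - 2 * ridge * alpha
        alpha += lr * grad
    return (Fx @ alpha).mean() - np.exp(Fy @ alpha - 1.0).mean()

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)     # samples from P = N(1, 1)
y = rng.normal(0.0, 1.0, size=200)     # samples from Q = N(0, 1)
print(kl_estimate(x, y))               # true KL(P || Q) = 0.5
```
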
Proceedings Article

Variational Consensus Monte Carlo

TL;DR: Variational consensus Monte Carlo (VCMC), introduced in this paper, is a variational Bayes algorithm that optimizes over aggregation functions to obtain samples from a distribution that better approximates the target.
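
As background for the aggregation step that VCMC generalizes, the sketch below shows a fixed weighted-average aggregation of per-shard ("subposterior") samples, with weights given by inverse sample covariances; VCMC's variational optimization over aggregation functions is not shown, and the shard data here are synthetic.

```python
# Combine subposterior samples from data shards into approximate posterior
# draws using inverse-covariance weights (the fixed rule VCMC relaxes).
import numpy as np

def weighted_average_aggregate(shard_samples):
    """shard_samples: list of (n_samples, dim) arrays, one per data shard."""
    weights = [np.linalg.inv(np.cov(s, rowvar=False)) for s in shard_samples]
    total = np.linalg.inv(sum(weights))
    combined = [
        total @ sum(w @ s[k] for w, s in zip(weights, shard_samples))
        for k in range(min(len(s) for s in shard_samples))
    ]
    return np.asarray(combined)

rng = np.random.default_rng(0)
shards = [rng.normal(loc=m, scale=1.0, size=(500, 2)) for m in (0.1, -0.2, 0.05)]
print(weighted_average_aggregate(shards).mean(axis=0))
```
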