Michael I. Jordan

Researcher at University of California, Berkeley

Publications - 1110
Citations - 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley, whose work spans topics including computer science and inference. He has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. His previous affiliations include Stanford University and Princeton University.

Papers
Journal Article

Decoding from Pooled Data: Sharp Information-Theoretic Bounds

TL;DR: This paper establishes nearly matching upper and lower bounds on the minimum number of queries m such that, with probability tending to 1 as n → ∞, the planted assignment is the unique solution. It also proves an identity of independent interest relating the Gaussian integral over the space of Eulerian flows of a graph to its spanning tree polynomial.
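
The query model behind these bounds is simple to simulate: each query selects a pool of items and returns the histogram of hidden categories inside the pool. Below is a minimal sketch of that model; the parameter values and the Bernoulli(1/2) pooling design are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of the pooled-data query model: n items, each carrying one
# of d hidden categories; a query reveals the per-category histogram of a
# pool. All parameters here are illustrative.
rng = np.random.default_rng(0)
n, d, m = 100, 2, 30

sigma = rng.integers(0, d, size=n)        # planted category assignment
pools = rng.integers(0, 2, size=(m, n))   # each row picks a random pool

# Each query i returns the histogram of categories inside pool i.
answers = np.stack([np.bincount(sigma[pools[i] == 1], minlength=d)
                    for i in range(m)])
print(answers.shape)  # (m, d): one histogram per query
```
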
Book Chapter

Supervised learning and divide-and-conquer: a statistical approach

TL;DR: The problem of learning the model's parameters is formulated as a maximum likelihood estimation problem, and an Expectation-Maximization (EM) algorithm is developed for the model.
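
As a generic illustration of the E-step/M-step recipe mentioned here (not the paper's divide-and-conquer architecture), the following sketch runs EM on a one-dimensional, two-component Gaussian mixture with unit variances; the data and initialization are illustrative assumptions.

```python
import numpy as np

# Generic EM sketch for a 1-D, two-component, unit-variance Gaussian
# mixture; illustrates the E-step/M-step pattern only.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

mu, pi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    # (unit variances, so the shared normalizing constant cancels).
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2) * pi
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and component means.
    pi = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
print(mu)  # roughly [-2, 3]
```
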
Proceedings Article

Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets

TL;DR: In this paper, the authors study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market, and propose a reinforcement learning framework that integrates optimistic value iteration with maximum weight matching.
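
The maximum weight matching step can be sketched with an off-the-shelf assignment solver; the value matrix below is random and merely stands in for whatever optimistic value estimates such a planner would maintain.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Max-weight matching on an illustrative random value matrix: entry (i, j)
# plays the role of the planner's estimated value of matching agent i on
# one side of the market to agent j on the other.
rng = np.random.default_rng(2)
values = rng.uniform(size=(4, 4))

rows, cols = linear_sum_assignment(values, maximize=True)
print(list(zip(rows, cols)), values[rows, cols].sum())
```
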
Proceedings Article

Gen-Oja: Simple & Efficient Algorithm for Streaming Generalized Eigenvector Computation

TL;DR: Global convergence of the proposed algorithm is proved by borrowing ideas from the theory of fast-mixing Markov chains and two-time-scale stochastic approximation, showing that it achieves the optimal rate of convergence.
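
A rough sketch of the two-time-scale pattern this summary alludes to, for the generalized eigenproblem A v = λ B v: a fast iterate tracks B⁻¹ A v while a slow Oja-style iterate updates the eigenvector estimate. Batch matrices stand in for streamed samples, and the update form and step sizes are illustrative rather than the paper's.

```python
import numpy as np

# Two-time-scale sketch for A v = lambda B v. Full matrices stand in for
# streamed samples; step sizes are illustrative.
rng = np.random.default_rng(3)
M = rng.normal(size=(5, 5)); A = M @ M.T                  # symmetric PSD
N = rng.normal(size=(5, 5)); B = N @ N.T + 5 * np.eye(5)  # well-conditioned

v = rng.normal(size=5); v /= np.linalg.norm(v)
w = np.zeros(5)
for t in range(1, 5001):
    w += 0.05 * (A @ v - B @ w)    # fast iterate: tracks B^{-1} A v
    v += w / t                     # slow Oja-style step, decaying rate
    v /= np.linalg.norm(v)

lam = (v @ A @ v) / (v @ B @ v)    # Rayleigh quotient at the learned v
print(np.linalg.norm(A @ v - lam * B @ v))  # small residual => near-solution
```
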
Posted Content

Optimal Mean Estimation without a Variance

TL;DR: This work studies the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist, and establishes an information-theoretic lower bound on the optimal attainable confidence interval.
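
For context, a standard baseline in this infinite-variance setting is the median-of-means estimator, sketched below; it illustrates the problem rather than the paper's optimal procedure, and the Pareto data are an illustrative assumption.

```python
import numpy as np

# Median-of-means: split the sample into k blocks, average each block, and
# take the median of the block means. A standard heavy-tailed baseline.
def median_of_means(x, k=10):
    blocks = np.array_split(np.random.permutation(x), k)
    return np.median([b.mean() for b in blocks])

# Classical Pareto with alpha = 1.5: finite mean (= 3) but infinite variance.
rng = np.random.default_rng(4)
x = rng.pareto(1.5, size=10_000) + 1.0
print(median_of_means(x), x.mean())
```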