Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher from University of California, Berkeley. The author has contributed to research topics including computer science and inference. The author has an h-index of 176, has co-authored 1016 publications, and has received 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.
Papers
Proceedings Article
Estimating Dependency Structure as a Hidden Variable
Marina Meila, Michael I. Jordan, +1 more
TL;DR: Presents a family of efficient algorithms that combine EM with the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees under a variety of priors, including the Dirichlet and MDL priors.
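The core of fitting a single tree in such a mixture is the classic Chow-Liu construction: a maximum-weight spanning tree over pairwise mutual information. A minimal sketch for binary data (function names are my own, not from the paper):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(data):
    """Edges of the maximum-likelihood tree: a maximum-weight
    spanning tree over pairwise mutual information."""
    d = data.shape[1]
    w = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            w[i, j] = mutual_information(data[:, i], data[:, j])
    # minimum_spanning_tree minimizes, so negate the MI weights
    mst = minimum_spanning_tree(-w).tocoo()
    return sorted((int(i), int(j)) for i, j in zip(mst.row, mst.col))

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = (x0 ^ (rng.random(500) < 0.05)).astype(int)  # strongly coupled to x0
x2 = rng.integers(0, 2, 500)                      # independent noise
edges = chow_liu_edges(np.column_stack([x0, x1, x2]))
print(edges)  # the strongly coupled pair (0, 1) appears
```

In the paper's mixture setting, the E-step computes soft assignments of data points to trees and this spanning-tree step becomes the M-step, run once per mixture component with weighted counts.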
Book Chapter
Mixed Membership Matrix Factorization
TL;DR: Develops a fully Bayesian framework for integrating discrete mixed membership and continuous latent factor models into unified Mixed Membership Matrix Factorization (M3F) models; two M3F models with derived Gibbs sampling inference procedures are introduced and validated on the EachMovie, MovieLens, and Netflix Prize collaborative filtering datasets.
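To give a flavor of Gibbs sampling in a latent factor model, here is a toy sampler for plain Bayesian matrix factorization, not the M3F model itself; the priors and names are illustrative assumptions:

```python
import numpy as np

def gibbs_mf(R, k=2, n_iter=200, sigma2=0.05, rng=None):
    """Toy Gibbs sampler for R ~ N(U V^T, sigma2) with standard-normal
    priors on the factor rows (a sketch, not the paper's M3F sampler)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, m = R.shape
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((m, k))
    for _ in range(n_iter):
        # alternate: resample rows of U given V, then rows of V given U
        for F, G, M in ((U, V, R), (V, U, R.T)):
            prec = np.eye(k) + G.T @ G / sigma2     # posterior precision
            cov = np.linalg.inv(prec)
            for i in range(M.shape[0]):
                mean = cov @ (G.T @ M[i] / sigma2)  # posterior mean
                F[i] = rng.multivariate_normal(mean, cov)
    return U, V

rng = np.random.default_rng(1)
R = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # rank-2 target
U, V = gibbs_mf(R, k=2, rng=rng)
err = np.mean((U @ V.T - R) ** 2)
print(err)  # reconstruction error is small
```

The Gaussian conditional conjugacy that makes each row update closed-form is the same mechanism the M3F Gibbs procedures exploit, with extra discrete mixed-membership indicators sampled alongside the factors.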
Posted Content
Feature allocations, probability functions, and paintboxes
TL;DR: This article proposes feature allocation, a generalization of the clustering problem in which each data point can belong to an arbitrary non-negative integer number of groups, now called features or topics.
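The canonical prior over such allocations is the Indian buffet process: customer i takes each previously tried dish with probability proportional to its popularity, plus a Poisson(α/i) number of new dishes. A small sampler sketch (my own naming):

```python
import numpy as np

def sample_ibp(n, alpha, rng):
    """Draw a binary feature-allocation matrix Z (n points x K features)
    from the Indian buffet process; K itself is random."""
    Z = np.zeros((n, 0), dtype=int)
    for i in range(1, n + 1):
        counts = Z[:i - 1].sum(axis=0)                 # popularity of each feature
        old = (rng.random(Z.shape[1]) < counts / i).astype(int)
        new = rng.poisson(alpha / i)                   # brand-new features
        Z = np.pad(Z, ((0, 0), (0, new)))
        Z[i - 1] = np.concatenate([old, np.ones(new, dtype=int)])
    return Z

Z = sample_ibp(10, alpha=2.0, rng=np.random.default_rng(0))
print(Z.shape)  # each row can switch on any number of features
```

Unlike a clustering, each row of Z may contain zero, one, or many ones, which is exactly the relaxation the paper's paintbox characterization generalizes.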
Proceedings Article
Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation
TL;DR: In this article, the authors provide a detailed study of the estimation of probability distributions in a stringent setting in which data is kept private even from the statistician, and give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental trade-offs between privacy and convergence rate.
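The simplest locally private mechanism in this vein is randomized response: each respondent flips their bit with a probability governed by the privacy parameter epsilon, and the statistician debiases the aggregate. A sketch (function names are my own):

```python
import numpy as np

def privatize(bit, epsilon, rng):
    """Randomized response: report truthfully with prob e^eps / (1 + e^eps)."""
    p_true = np.exp(epsilon) / (1 + np.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def estimate_rate(reports, epsilon):
    """Unbiased estimate of P(bit = 1) from the privatized reports:
    E[report] = (2p - 1) * theta + (1 - p), so invert that affine map."""
    p = np.exp(epsilon) / (1 + np.exp(epsilon))
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(1)
bits = (rng.random(20000) < 0.3).astype(int)      # true rate is 0.3
reports = [privatize(b, 1.0, rng) for b in bits]
est = estimate_rate(reports, 1.0)
print(est)  # close to 0.3 despite the noisy reports
```

The paper's minimax rates quantify the cost visible here: the debiasing factor 1/(2p - 1) blows up as epsilon shrinks, so stronger privacy forces slower convergence.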
Journal Article
EP-GIG priors and applications in Bayesian sparse learning
TL;DR: This paper defines EP-GIG priors as mixtures of exponential power distributions with a generalized inverse Gaussian density, a variant of the generalized hyperbolic distributions, and shows that the resulting algorithms bear an interesting resemblance to iteratively reweighted l2 or l1 methods.
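The flavor of that resemblance: an l1 penalty can be majorized coordinate-wise by a weighted l2 penalty, turning sparse regression into a sequence of ridge solves. A sketch for the plain lasso objective, not the paper's EP-GIG EM updates (names are my own):

```python
import numpy as np

def irls_l1(X, y, lam=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted l2 solver for ||y - Xb||^2 + lam * ||b||_1:
    majorize |b_i| by b_i^2 / (2 |b_i_old|) and solve a weighted ridge."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]      # warm start at least squares
    for _ in range(n_iter):
        w = lam / (2 * (np.abs(b) + eps))         # ridge weights from last iterate
        b = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
    return b

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
true_b = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ true_b + 0.1 * rng.standard_normal(200)
b_hat = irls_l1(X, y)
print(np.round(b_hat, 2))  # the zero coefficients are shrunk toward 0
```

In the EP-GIG view, the reweighting arises from the E-step over the latent Gaussian scale mixture, with the prior's density determining exactly how each weight depends on the previous iterate.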