Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research in topics including computer science and inference, has an h-index of 176, and has co-authored 1016 publications receiving 216204 citations. His previous affiliations include Stanford University and Princeton University.
Papers
Journal ArticleDOI
Genome-Wide Requirements for Resistance to Functionally Distinct DNA-Damaging Agents
William Lee, Robert P. St.Onge, Michael Proctor, Patrick Flaherty, Michael I. Jordan, Adam P. Arkin, Ronald W. Davis, Corey Nislow, Guri Giaever +10 more
TL;DR: Clustering the data for 12 distinct compounds uncovered both known and novel functional interactions that comprise the DNA-damage response and allowed us to define the genetic determinants required for repair of interstrand cross-links.
Proceedings Article
Hierarchies of adaptive experts
TL;DR: A neural network architecture that discovers a recursive decomposition of its input space: competition among networks recursively splits the input space into nested regions, and a separate associative mapping is learned within each region.
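The region-splitting idea above can be illustrated with a minimal sketch of a single-level mixture of experts: a softmax gating network softly assigns each input to a region, and each expert supplies a separate linear mapping for its region. This is a hypothetical toy in NumPy (the function and variable names are assumptions, not from the paper), not the paper's full recursive hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_predict(x, gate_w, expert_ws):
    """Gate-weighted combination of expert outputs for inputs x of shape (n, d)."""
    gates = softmax(x @ gate_w)                          # (n, k): soft region assignment
    outs = np.stack([x @ w for w in expert_ws], axis=1)  # (n, k, out): per-expert outputs
    return (gates[..., None] * outs).sum(axis=1)         # gate-weighted mixture

x = rng.normal(size=(5, 3))
gate_w = rng.normal(size=(3, 2))                         # gating over k = 2 experts
expert_ws = [rng.normal(size=(3, 1)) for _ in range(2)]
y = moe_predict(x, gate_w, expert_ws)
print(y.shape)  # (5, 1)
```

In the hierarchical version, each expert is itself replaced by another gated mixture, yielding the recursive decomposition into nested regions.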
Proceedings Article
Tree-Structured Stick Breaking for Hierarchical Data
TL;DR: This paper uses nested stick-breaking processes to allow for trees of unbounded width and depth, where data can live at any node and are infinitely exchangeable, and applies the method to hierarchical clustering of images and topic modeling of text data.
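A nested stick-breaking draw can be sketched as follows: a datum stops at each node with probability nu drawn from Beta(1, alpha), and otherwise descends to a child chosen by a second stick-breaking over branches with Beta(1, gamma) weights, so data can live at any node of a tree of unbounded width and depth. This is a hedged illustrative sampler (the parameter names and depth cap are assumptions for this sketch), not the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_node(alpha=1.0, gamma=1.0, max_depth=50):
    """Sample the tree node a datum lives at, as a tuple of child indices.

    max_depth is a practical cap for this sketch; the process itself is
    unbounded in depth.
    """
    path = []
    for _ in range(max_depth):
        nu = rng.beta(1.0, alpha)          # stopping probability at this node
        if rng.random() < nu:
            return tuple(path)             # datum lives at this node
        # stick-breaking over children: try branches until one "sticks"
        child = 0
        while rng.random() >= rng.beta(1.0, gamma):
            child += 1
        path.append(child)
    return tuple(path)

print(sample_node())  # e.g. () for the root, or a deeper path like (0, 2)
```

Smaller alpha makes data stop nearer the root; smaller gamma concentrates mass on earlier-indexed children.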
Posted Content
Non-convex Finite-Sum Optimization Via SCSG Methods
TL;DR: A class of algorithms, variants of the stochastically controlled stochastic gradient (SCSG) methods, for the smooth non-convex finite-sum optimization problem; experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
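The SCSG scheme can be sketched on a toy finite sum: each outer iteration computes a gradient on a random batch, then runs a geometrically distributed number of SVRG-style variance-reduced updates on single samples drawn from that batch. This is a minimal illustrative sketch on a least-squares objective (problem sizes, step size, and batch size are assumptions), not the paper's neural-network experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2,
# built from a known x_true so the problem is exactly solvable.
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad(idx, x):
    """Average gradient over the samples in idx."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

x = np.zeros(d)
eta, batch = 0.05, 32
for _ in range(100):
    B = rng.choice(n, size=batch, replace=False)
    g_B = grad(B, x)                                 # batch gradient at the snapshot
    x_snap = x.copy()
    for _ in range(rng.geometric(1.0 / batch)):      # geometric inner-loop length
        i = rng.choice(B, size=1)
        v = grad(i, x) - grad(i, x_snap) + g_B       # variance-reduced direction
        x = x - eta * v

print(np.linalg.norm(x - x_true))  # should shrink toward 0
```

The geometric inner-loop length and batch-only gradient snapshot are what distinguish SCSG from plain SVRG, which uses full-data snapshots and a fixed epoch length.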
Journal ArticleDOI
Mixed Memory Markov Models: Decomposing Complex Stochastic Processes as Mixtures of Simpler Ones
TL;DR: A set of generalized Baum-Welch updates is derived for factorial hidden Markov models that express their transition matrices as a convex combination, or mixture, of simpler dynamical models.
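The mixture decomposition above can be illustrated directly: a higher-order transition probability is written as a convex combination of first-order transition matrices, one per lag. This is a hedged toy sketch (state count, lag count, and weights are assumptions), showing only the mixture parameterization, not the Baum-Welch updates.

```python
import numpy as np

rng = np.random.default_rng(3)

# P(x_t | x_{t-1}, ..., x_{t-k}) = sum_j mu[j] * T[j][x_{t-j}, x_t]:
# a k-th order chain parameterized as a mixture of k first-order chains.
n_states, k = 4, 2
mu = np.array([0.7, 0.3])                                 # mixture weights over lags
T = rng.dirichlet(np.ones(n_states), size=(k, n_states))  # (k, from, to); rows sum to 1

def next_state_dist(history):
    """Distribution of x_t given history, where history[-1] is x_{t-1}."""
    p = np.zeros(n_states)
    for j in range(k):
        p += mu[j] * T[j][history[-(j + 1)]]
    return p

p = next_state_dist([2, 0])  # x_{t-2} = 2, x_{t-1} = 0
print(p, p.sum())            # a valid distribution; sums to 1
```

Because the weights mu are convex and each row of each T[j] is a distribution, the mixture is automatically a valid conditional distribution, while needing only k * n^2 parameters instead of n^k * n.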