Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. The author has contributed to research in topics: computer science & inference. The author has an h-index of 176 and has co-authored 1016 publications receiving 216,204 citations. Previous affiliations of Michael I. Jordan include Stanford University & Princeton University.
Papers
Posted Content
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
TL;DR: A perturbation-based method, Greedy Attack, and a scalable learning-based method, Gumbel Attack, are derived, illustrating various tradeoffs in the design of adversarial attacks on discrete data.
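To make the greedy idea concrete, here is a minimal toy sketch of a greedy perturbation attack on discrete input: at each step it tries every single-token replacement and commits the one that most reduces the model's score for the correct class. Everything here (`predict`, the vocabulary, the budget) is a hypothetical stand-in, not the authors' actual algorithm or code.

```python
import numpy as np

def greedy_attack(x, predict, vocab, budget=2):
    """Greedily replace tokens in a discrete input to lower a model's score.

    x       : 1-D integer array of token ids
    predict : callable mapping an input to P(correct class | input)
    vocab   : candidate replacement token ids
    budget  : maximum number of positions to perturb
    """
    x = x.copy()
    for _ in range(budget):
        base = predict(x)
        best_drop, best_pos, best_tok = 0.0, None, None
        for i in range(len(x)):
            for tok in vocab:
                if tok == x[i]:
                    continue
                x_try = x.copy()
                x_try[i] = tok
                drop = base - predict(x_try)   # score reduction from this edit
                if drop > best_drop:
                    best_drop, best_pos, best_tok = drop, i, tok
        if best_pos is None:                   # no single edit helps any more
            break
        x[best_pos] = best_tok                 # commit the best edit
    return x
```

The quadratic inner loop is exactly the cost that a learned, Gumbel-softmax-style attack amortizes by sampling promising positions instead of scanning all of them.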
Journal ArticleDOI
A Dual Receptor Crosstalk Model of G-Protein-Coupled Signal Transduction
Patrick Flaherty, Mala L. Radhakrishnan, Tuan A. Dinh, Robert A. Rebres, Tamara I. A. Roach, Michael I. Jordan, Adam P. Arkin +7 more
TL;DR: A formal hypothesis is developed, in the form of a kinetic model, for the mechanism of action of this GPCR signal transduction system that predicts a synergistic region in the calcium peak height dose response that results when cells are simultaneously stimulated by C5a and UDP.
Proceedings Article
Recursive Algorithms for Approximating Probabilities in Graphical Models
TL;DR: A recursive node-elimination formalism for efficiently approximating large probabilistic networks is presented, showing that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework.
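The core operation behind any node-elimination scheme is summing a variable out of the factors that mention it. The sketch below shows that operation exactly, for binary variables; the paper's contribution is a recursive *approximate* version of this idea, which this toy code does not attempt to reproduce. The factor representation here is an assumption for illustration.

```python
import itertools
import numpy as np

def eliminate(factors, var):
    """Sum a binary variable out of a list of factors.

    Each factor is (scope, table): a tuple of variable names and an
    ndarray indexed by those variables in scope order (values 0/1).
    """
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    # Multiply together every factor whose scope mentions `var` ...
    scope = sorted({v for s, _ in touching for v in s})
    table = np.zeros([2] * len(scope))
    for assign in itertools.product([0, 1], repeat=len(scope)):
        val = 1.0
        for s, t in touching:
            val *= t[tuple(assign[scope.index(v)] for v in s)]
        table[assign] = val
    # ... then sum over the axis belonging to `var`.
    axis = scope.index(var)
    new_scope = tuple(v for v in scope if v != var)
    rest.append((new_scope, table.sum(axis=axis)))
    return rest
```

Eliminating nodes one at a time in this way turns a joint distribution over many variables into a sequence of small local computations, which is what makes recursive approximation schemes tractable on large networks.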
Posted Content
Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification
TL;DR: It is proved that the ordinary least-squares (OLS) estimator attains nearly minimax-optimal performance for the identification of linear dynamical systems from a single observed trajectory, and the technique is generalized to provide bounds for a broader class of linear response time series.
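The estimator analyzed in that paper is easy to state: given one trajectory of $x_{t+1} = A x_t + w_t$, regress each state on its predecessor. A minimal simulation sketch, with an assumed ground-truth matrix and noise level chosen purely for illustration:

```python
import numpy as np

# Simulate a stable linear system x_{t+1} = A x_t + w_t,
# then recover A by ordinary least squares from the single trajectory.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])          # assumed ground-truth dynamics
T = 5000                                 # trajectory length

x = np.zeros((T + 1, 2))
for t in range(T):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(2)

# OLS: A_hat = argmin_A  sum_t || x_{t+1} - A x_t ||^2.
# lstsq solves X @ B = Y with rows x_t^T, x_{t+1}^T, so B = A^T.
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

The point of the "without mixing" analysis is that this single-trajectory estimate concentrates around `A_true` even though consecutive samples are highly dependent, so no mixing-time argument is needed.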
Proceedings Article
Averaging Stochastic Gradient Descent on Riemannian Manifolds
TL;DR: In this paper, a geometric framework was developed to transform a sequence of slowly converging iterates generated by stochastic gradient descent (SGD) on a Riemannian manifold into an averaged iterate sequence with a robust and fast $O(1/n)$ convergence rate.
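The Euclidean special case of this idea is classical Polyak–Ruppert averaging: run SGD with a slowly decaying step size and report the running average of the iterates, which converges faster than the iterates themselves. The toy problem below (a one-dimensional quadratic with noisy gradients) is an illustrative stand-in, not the paper's Riemannian setting, where the average would be computed with manifold operations rather than arithmetic means.

```python
import numpy as np

# SGD on f(w) = E[(w - z)^2 / 2] with z ~ N(w_star, 1),
# followed by a running (Polyak-Ruppert) average of the iterates.
rng = np.random.default_rng(1)
w_star = 3.0                             # assumed optimum, for illustration
w, avg = 0.0, 0.0
n = 20000
for t in range(1, n + 1):
    z = w_star + rng.standard_normal()   # noisy sample
    grad = w - z                         # stochastic gradient of f at w
    w -= grad / t**0.5                   # slowly decaying step size ~ 1/sqrt(t)
    avg += (w - avg) / t                 # running average of the iterates
```

The raw iterate `w` keeps fluctuating at the scale of the step size, while `avg` settles much closer to `w_star`; transporting exactly this averaging step onto a curved manifold is the paper's contribution.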