Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1,110
Citations - 241,763
Michael I. Jordan is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Computer science & Inference. The author has an h-index of 176 and has co-authored 1,016 publications receiving 216,204 citations. Previous affiliations of Michael I. Jordan include Stanford University & Princeton University.
Papers
Posted Content
Rao-Blackwellized Stochastic Gradients for Discrete Distributions
TL;DR: This paper describes a technique that can be applied to reduce the variance of any stochastic gradient estimator, without changing its bias, and demonstrates the improvement it yields on a semi-supervised classification problem and a pixel attention task.
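The variance-reduction principle behind Rao-Blackwellization can be illustrated with a toy example (an illustrative setup, not the paper's actual estimator): when estimating an expectation that involves a discrete variable, summing that variable out analytically while sampling the rest gives an estimator that is still unbiased but never has larger variance.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
pi = np.array([0.1, 0.3, 0.2, 0.25, 0.15])  # categorical probabilities

def f(z, u):
    # arbitrary integrand depending on a discrete z and a continuous u
    return (z + 1) * u ** 2

n = 20_000
u = rng.random(n)
z = rng.choice(K, size=n, p=pi)

# Naive Monte Carlo: sample both z and u.
naive = f(z, u)

# Rao-Blackwellized: sample u only, sum the discrete z out analytically.
rb = (pi[None, :] * f(np.arange(K)[None, :], u[:, None])).sum(axis=1)

print(naive.mean(), rb.mean())  # both unbiased estimates of E[f(Z, U)]
print(naive.var(), rb.var())    # the Rao-Blackwellized variance is smaller
```

The guarantee follows from the conditional-variance decomposition: `rb` is the conditional expectation E[f(Z, U) | U], and Var(E[f | U]) ≤ Var(f) always holds.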
Journal Article
Saturating Splines and Feature Selection.
TL;DR: In this paper, the adaptive regression spline model was extended by incorporating saturation, the natural requirement that a function extend as a constant outside a certain range, and the authors fit saturating splines to data via a convex optimization problem over a space of measures.
Journal ArticleDOI
On the Adaptivity of Stochastic Gradient-Based Optimization
Lihua Lei, Michael I. Jordan, +1 more
TL;DR: In this article, the authors note that stochastic gradient-based optimization has been a core enabling methodology in applications to large-scale problems in machine learning and related areas, but that a significant gap remains between its theoretical analysis and practical performance, and they study the adaptivity of stochastic gradient methods to unknown problem parameters.
Posted Content
Asymptotic behavior of $\ell_p$-based Laplacian regularization in semi-supervised learning
TL;DR: It is shown that the effect of the underlying density vanishes monotonically with $p$, such that in the limit $p = \infty$, corresponding to the so-called Absolutely Minimal Lipschitz Extension, the estimate $\hat{f}$ is independent of the distribution $P$.
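As a concrete reference point for graph-based semi-supervised learning, the familiar $p = 2$ case of Laplacian regularization has a closed-form "harmonic" solution on the unlabeled nodes. A minimal sketch on a small path graph (an illustrative setup, not taken from the paper):

```python
import numpy as np

# Small weighted graph: 6 nodes on a path, nodes 0 and 5 labeled.
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W  # combinatorial graph Laplacian

labeled = [0, 5]
unlabeled = [1, 2, 3, 4]
y = np.array([0.0, 1.0])  # labels at nodes 0 and 5

# p = 2: minimize f^T L f subject to f agreeing with the labels,
# which reduces to solving the harmonic system L_uu f_u = -L_ul y.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y)
print(f_u)  # linear interpolation along the path: [0.2, 0.4, 0.6, 0.8]
```

On a path graph the harmonic extension is exactly linear interpolation between the labeled endpoints; larger $p$ interpolates toward the Lipschitz-extension behavior the paper analyzes.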
Proceedings Article
On the Convergence Rate of Decomposable Submodular Function Minimization
TL;DR: It is shown that the alternating projections algorithm for decomposable submodular function minimization converges linearly, and upper and lower bounds on the rate of convergence are provided; the analysis relies on the geometry of submodular polyhedra and draws on results from spectral graph theory.