Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. His research spans topics including computer science and inference. He has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.
Papers
Posted Content
Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis
TL;DR: This work addresses the problem of policy evaluation in discounted Markov decision processes and provides instance-dependent guarantees on the $\ell_\infty$-error under a generative model. It also establishes both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
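The TL;DR above concerns evaluating a fixed policy by temporal-difference learning. As a loose illustration only (not the paper's instance-dependent analysis), here is a minimal TD(0) sketch on a small, hypothetical 3-state Markov reward process, compared against the exact value function:

```python
import numpy as np

# Hypothetical 3-state Markov reward process (illustrative, not from the paper):
# transition matrix P, per-state reward r, discount factor gamma.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9

# Exact value function solves V = r + gamma * P V.
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)

rng = np.random.default_rng(0)
V = np.zeros(3)
s = 0
for t in range(200_000):
    s_next = rng.choice(3, p=P[s])
    alpha = 1.0 / (1 + t) ** 0.6                      # polynomially decaying step size
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])  # TD(0) update
    s = s_next

print(np.max(np.abs(V - V_exact)))  # the ell_infinity error studied in the paper
```

The $\ell_\infty$-error printed at the end is the quantity for which the paper derives instance-dependent upper and lower bounds.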
Posted Content
Splash: User-friendly Programming Interface for Parallelizing Stochastic Algorithms
Yuchen Zhang, Michael I. Jordan +1 more
TL;DR: This paper proposes a general framework for parallelizing stochastic algorithms on multi-node distributed systems called Splash, which consists of a programming interface and an execution engine and provides theoretical justifications on the optimal rate of convergence.
Proceedings ArticleDOI
Image Denoising with Nonparametric Hidden Markov Trees
TL;DR: A hierarchical, nonparametric statistical model for wavelet representations of natural images that automatically adapts to the complexity of different images and wavelet bases through a Monte Carlo learning algorithm.
Posted Content
A Control-Theoretic Perspective on Optimal High-Order Optimization
Tianyi Lin, Michael I. Jordan +1 more
TL;DR: A control-theoretic perspective on optimal tensor algorithms for minimizing a convex function in a finite-dimensional Euclidean space is provided.
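The control-theoretic perspective above treats optimization algorithms as discretizations of continuous-time dynamics. As a simplified illustration of that general viewpoint (not the paper's optimal tensor algorithm), this sketch discretizes the gradient flow $\dot{x}(t) = -\nabla f(x(t))$ on a convex quadratic by forward Euler, which recovers gradient descent:

```python
import numpy as np

# Convex quadratic f(x) = 0.5 * x^T A x - b^T x on R^2 (illustrative choice).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(A, b)  # unique minimizer, where grad f = A x - b = 0

x = np.zeros(2)
h = 0.1                          # Euler step size (plays the role of a learning rate)
for _ in range(500):
    x = x - h * (A @ x - b)      # one forward-Euler step of the gradient flow

print(np.linalg.norm(x - x_star))
```

Higher-order methods of the kind the paper studies arise from more refined dynamics and discretizations; this first-order example is only meant to show the ODE-to-algorithm correspondence.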
Posted Content
Kernel Feature Selection via Conditional Covariance Minimization
TL;DR: The authors proposed a method for feature selection that employs kernel-based measures of independence to find a subset of covariates that is maximally predictive of the response, and showed how to perform feature selection via a constrained optimization problem involving the trace of the conditional covariance operator.
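The TL;DR above describes selecting features by minimizing the trace of a conditional covariance operator. As a hedged sketch, the snippet below uses a simplified empirical surrogate for that trace, $\mathrm{Tr}[G_y (G_x + n\varepsilon I)^{-1}]$ with centered Gaussian Gram matrices, inside a greedy forward-selection loop on synthetic data; the kernel bandwidth, ridge parameter, and greedy strategy are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def centered_gram(Z, sigma=1.0):
    """Centered Gaussian (RBF) Gram matrix of the rows of Z."""
    sq = np.sum(Z ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * sigma ** 2))
    n = Z.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def score(X, y, features, eps=1e-3):
    """Surrogate for the trace of the conditional covariance of y given
    the chosen features: Tr[G_y (G_x + n*eps*I)^{-1}].
    Smaller values indicate a more predictive feature subset."""
    n = X.shape[0]
    Gx = centered_gram(X[:, list(features)])
    Gy = centered_gram(y.reshape(-1, 1))
    return np.trace(np.linalg.solve(Gx + n * eps * np.eye(n), Gy))

# Synthetic data: y depends only on features 0 and 1 (by construction).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))
y = X[:, 0] + X[:, 1]

# Greedy forward selection of 2 features by minimizing the surrogate trace.
selected = []
for _ in range(2):
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    best = min(remaining, key=lambda j: score(X, y, selected + [j]))
    selected.append(best)

print(sorted(selected))
```

Greedy selection stands in here for the constrained optimization over the conditional covariance trace described in the TL;DR.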