Michael I. Jordan

Researcher at University of California, Berkeley

Publications: 1110
Citations: 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research in topics including computer science and inference. He has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. His previous affiliations include Stanford University and Princeton University.

Papers
Journal Article

Computational consequences of a bias toward short connections

TL;DR: This paper presents simulations showing that systems that learn under an architectural bias toward short connections have a number of desirable properties, including a tendency to decompose tasks into subtasks, to decouple the dynamics of recurrent subsystems, and to develop location-sensitive internal representations.
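The bias here is imposed during learning rather than hard-wired. Below is a minimal sketch of one way such a bias can be realized, assuming units are assigned spatial coordinates and each weight receives decay proportional to the squared length of the connection it implements; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def short_connection_penalty_grad(W, pos_in, pos_out, strength=1e-3):
    """Gradient of a distance-dependent weight-decay regularizer:
    each weight is shrunk in proportion to the squared Euclidean
    distance between the two units it connects, so long connections
    are penalized more heavily than short ones.
    W: (n_out, n_in) weight matrix.
    pos_in: (n_in, d) coordinates of the input units.
    pos_out: (n_out, d) coordinates of the output units."""
    # dist2[i, j] = squared distance between output unit i and input unit j
    dist2 = ((pos_out[:, None, :] - pos_in[None, :, :]) ** 2).sum(axis=-1)
    return strength * dist2 * W  # add this to the loss gradient for W
```

Added to the backpropagated gradient, this term shrinks long connections faster than short ones, which is the pressure the simulations credit with inducing task decomposition.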
Proceedings Article

A General Analysis of the Convergence of ADMM

TL;DR: This work provides a new proof of the linear convergence of the alternating direction method of multipliers when one of the objective terms is strongly convex, and demonstrates that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances.
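For concreteness, here is a minimal sketch of the ADMM iteration the analysis covers, instantiated for the lasso, where the quadratic data-fitting term supplies the strong convexity (assuming A has full column rank). The penalty parameter rho is exactly the kind of algorithm parameter the derived bound can help select.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the linear system used by every x-update.
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: minimize the strongly convex quadratic term
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        # z-update: soft-thresholding, the prox of the l1 penalty
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # scaled dual update
        u = u + x - z
    return z
```

Since the paper's bound gives the convergence rate as an explicit function of the parameters, one can minimize that bound over rho before running the loop rather than tuning it by hand.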
Posted Content

A Linearly-Convergent Stochastic L-BFGS Algorithm

TL;DR: This paper proposes a new stochastic L-BFGS algorithm, proves that it achieves a linear convergence rate for strongly convex and smooth functions, and shows that it performs well across a wide range of step sizes.
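As a rough illustration, the sketch below couples plain minibatch gradients with the classic L-BFGS two-loop recursion, forming curvature pairs from gradient differences evaluated on a shared minibatch so that s.T @ y > 0 holds for strongly convex losses. The published algorithm additionally uses variance-reduced (SVRG-style) gradients, which this simplified sketch omits; all names here are illustrative.

```python
import numpy as np
from collections import deque

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximates H @ grad."""
    q = grad.copy(); alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s); alphas.append(a); q -= a * y
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (y @ s) / (y @ y)          # initial Hessian scaling
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s); q += (a - b) * s
    return q

def stochastic_lbfgs(grad_batch, x0, n_data, lr=0.05, memory=10,
                     batch=32, curv_every=10, n_iter=500, rng=None):
    """grad_batch(x, idx) returns the minibatch gradient at x over idx."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy(); x_prev = x0.copy()
    S, Y = deque(maxlen=memory), deque(maxlen=memory)
    for t in range(n_iter):
        idx = rng.choice(n_data, size=batch, replace=False)
        g = grad_batch(x, idx)
        d = two_loop(g, list(S), list(Y))   # quasi-Newton direction
        x = x - lr * d
        if (t + 1) % curv_every == 0:
            s = x - x_prev
            # Evaluate both gradients on the SAME minibatch so the
            # curvature condition s @ y > 0 holds for strongly convex f.
            y = grad_batch(x, idx) - grad_batch(x_prev, idx)
            if s @ y > 1e-10:
                S.append(s); Y.append(y)
            x_prev = x.copy()
    return x
```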

Bayesian nonparametric latent feature models

TL;DR: This dissertation summarizes work advancing the state of the art in Bayesian nonparametric latent feature models, presenting a non-exchangeable framework for generalizing and extending the original priors and introducing four concrete generalizations applicable when the authors have prior knowledge about object relationships that can be captured via either a tree or a chain.
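The original prior that these non-exchangeable generalizations extend is the Indian Buffet Process. Here is a short sketch of drawing a binary object-by-feature matrix from that exchangeable prior; the tree- and chain-structured variants modify the feature-sharing probabilities used below.

```python
import numpy as np

def sample_ibp(n_objects, alpha, rng=None):
    """Draw a binary feature matrix Z from the Indian Buffet Process.
    Object i takes each existing feature k with probability m_k / i
    (m_k = number of earlier objects with that feature), then adds
    Poisson(alpha / i) brand-new features."""
    rng = rng or np.random.default_rng(0)
    rows, counts = [], []          # counts[k] = m_k so far
    for i in range(1, n_objects + 1):
        row = [1 if rng.random() < counts[k] / i else 0
               for k in range(len(counts))]
        k_new = rng.poisson(alpha / i)
        row += [1] * k_new
        counts = [c + r for c, r in zip(counts, row)] + [1] * k_new
        rows.append(row)
    # Pad earlier rows with zeros for features introduced later.
    width = len(counts)
    return np.array([r + [0] * (width - len(r)) for r in rows], dtype=int)
```

Exchangeability means the distribution of Z is invariant (up to column reordering) to the order in which objects arrive; the dissertation's framework relaxes exactly that assumption.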
Proceedings Article

Computing regularization paths for learning multiple kernels

TL;DR: Working in the setting of kernel linear regression and kernel logistic regression, this paper shows empirically that the effect of block 1-norm regularization differs notably from the (non-block) 1-norm regularization commonly used for variable selection, and that the regularization path is of particular value in the block case.
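A hedged sketch of why the block 1-norm behaves differently: its proximal operator shrinks each kernel's coefficient block by a fixed amount and zeroes whole blocks outright, so entire kernels enter and leave the model as the regularization weight varies. The grid-plus-warm-start loop below approximates the path rather than following it exactly as a dedicated path algorithm would; groups and names are of my own choosing.

```python
import numpy as np

def block_soft_threshold(w, groups, lam):
    """Prox of lam * sum_j ||w_{group_j}||_2: zeroes whole blocks."""
    w = w.copy()
    for g in groups:                       # g: list of coordinate indices
        norm = np.linalg.norm(w[g])
        w[g] *= max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
    return w

def group_lasso_path(X, y, groups, lams, n_iter=500):
    """Proximal-gradient solver for
    0.5*||y - Xw||^2 + lam * sum_j ||w_{group_j}||_2,
    warm-started along a grid of lam values."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    path = []
    for lam in lams:                          # each solve warm-starts the next
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y)
            w = block_soft_threshold(w - step * grad, groups, step * lam)
        path.append(w.copy())
    return path
```

Giving lams in decreasing order makes the warm starts effective, since each solution is a good initializer for the next, slightly less regularized problem.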