
Michael I. Jordan

Researcher at University of California, Berkeley

Publications - 1110
Citations - 241763

Michael I. Jordan is an academic researcher at the University of California, Berkeley. The author has contributed to research in topics including Computer science and Inference, has an h-index of 176, and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.

Papers
Journal Article

Learning the Kernel Matrix with Semidefinite Programming

TL;DR: This paper shows how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques, an approach that also yields a convex method for learning the 2-norm soft-margin parameter in support vector machines, solving an important open problem.
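
As a rough illustration of the idea (not the paper's exact soft-margin formulation), the sketch below learns a nonnegative combination of fixed base kernels by maximizing alignment with the label matrix y yᵀ under a trace constraint, posed as a semidefinite program in cvxpy. The base kernels, synthetic data, and trace bound are all assumptions made for the demo.

```python
# Minimal sketch: learn K = sum_i mu_i * K_i by maximizing alignment <K, y y^T>
# subject to a trace bound, with K constrained to be positive semidefinite.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Base kernels: two RBF widths and a linear kernel (illustrative choices).
kernels = [rbf(X, 0.5), rbf(X, 2.0), X @ X.T]

mu = cp.Variable(len(kernels), nonneg=True)   # combination weights
K = cp.Variable((n, n), PSD=True)             # learned kernel matrix
Y = np.outer(y, y)

constraints = [K == sum(mu[i] * kernels[i] for i in range(len(kernels))),
               cp.trace(K) <= n]
objective = cp.Maximize(cp.sum(cp.multiply(Y, K)))   # alignment with labels
cp.Problem(objective, constraints).solve()
print("learned kernel weights:", mu.value)
```
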
Journal Article

Hierarchical mixtures of experts and the EM algorithm

TL;DR: The paper presents an Expectation-Maximization (EM) algorithm for adjusting the parameters of the tree-structured architecture in a supervised learning setting, together with an on-line learning algorithm in which the parameters are updated incrementally.
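
For a flavor of the E-step/M-step pattern, here is a much-simplified sketch: a flat mixture of two linear-regression experts with input-independent mixing weights fit by EM. The paper's model instead uses input-dependent softmax gates arranged in a tree, so this is only an illustration; the synthetic data and number of experts are assumptions.

```python
# EM for a flat mixture of linear-regression experts (simplified illustration).
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, size=n)
# Data drawn from two linear regimes (an assumption for the demo).
y = np.where(x < 0, 2 * x + 1, -x + 0.5) + 0.1 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])          # bias + input

K = 2
W = rng.normal(size=(K, 2))                    # expert regression weights
pi = np.full(K, 1.0 / K)                       # mixing proportions
sigma2 = np.ones(K)                            # per-expert noise variances

for _ in range(50):
    # E-step: posterior responsibility of each expert for each data point.
    means = X @ W.T                            # (n, K) expert predictions
    lik = np.exp(-0.5 * (y[:, None] - means) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    resp = pi * lik
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per expert, then noise and mixing updates.
    for k in range(K):
        r = resp[:, k]
        Xw = X * r[:, None]
        W[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma2[k] = (r * (y - X @ W[k]) ** 2).sum() / r.sum()
    pi = resp.mean(axis=0)

print("expert weights:\n", W)
print("mixing proportions:", pi)
```
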
Journal Article

Kalman filtering with intermittent observations

TL;DR: This work addresses the problem of performing Kalman filtering with intermittent observations, showing the existence of a critical value for the observation arrival rate: above this threshold the expected estimation error covariance remains bounded, while below it the covariance can grow without bound.
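
A small simulation makes the setting concrete: a Kalman filter whose measurements arrive independently with probability lam, so that on dropped measurements only the prediction step is applied. The system matrices, noise covariances, and arrival probability below are illustrative assumptions, not values from the paper.

```python
# Kalman filtering where each observation arrives with probability lam.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity system matrix
C = np.array([[1.0, 0.0]])               # we only observe position
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.1]])                     # measurement noise covariance
lam = 0.6                                 # observation arrival probability

x = np.zeros(2)                           # true state
x_hat = np.zeros(2)                       # filter estimate
P = np.eye(2)                             # estimation error covariance

for t in range(200):
    # True system evolves; a measurement may or may not arrive this step.
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    arrived = rng.random() < lam

    # Time update (always performed).
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q

    # Measurement update only when an observation actually arrives.
    if arrived:
        y = C @ x + rng.multivariate_normal(np.zeros(1), R)
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (y - C @ x_hat)
        P = (np.eye(2) - K @ C) @ P

print("final error covariance trace:", np.trace(P))
```
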
Proceedings Article

On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes

TL;DR: It is shown, contrary to the widely held belief that discriminative classifiers are almost always to be preferred, that there are often two distinct regimes of performance as the training set size increases, with each algorithm doing better in one of them.
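
The comparison is easy to reproduce in spirit: the sketch below evaluates Gaussian naive Bayes and logistic regression at several training-set sizes on a synthetic dataset. The dataset, sizes, and solver settings are assumptions, not the paper's experimental setup.

```python
# Generative (Gaussian naive Bayes) vs. discriminative (logistic regression)
# accuracy as the training set grows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for m in (20, 50, 100, 500, 2000):
    Xm, ym = X_pool[:m], y_pool[:m]
    nb = GaussianNB().fit(Xm, ym)
    lr = LogisticRegression(max_iter=1000).fit(Xm, ym)
    print(f"m={m:5d}  naive Bayes={nb.score(X_test, y_test):.3f}  "
          f"logistic regression={lr.score(X_test, y_test):.3f}")
```
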
Journal Article

Active learning with statistical models

TL;DR: This article reviews how optimal data selection techniques have been used with feed-forward neural networks and shows how the same principles may be used to select data for two alternative, statistically based learning architectures: mixtures of Gaussians and locally weighted regression.
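
A minimal variance-based query loop conveys the idea: at each round, label the candidate input where the model's predictive variance is largest. A Gaussian-process regressor stands in here for the statistical models treated in the paper (mixtures of Gaussians and locally weighted regression); the target function, candidate pool, and number of rounds are assumptions for the demo.

```python
# Active learning by querying the most uncertain candidate input each round.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
f = lambda x: np.sin(3 * x) + 0.1 * rng.normal(size=np.shape(x))  # noisy target

pool = np.linspace(-2, 2, 200).reshape(-1, 1)   # unlabeled candidate inputs
X_train = pool[[0, -1]]                          # start with the two endpoints
y_train = f(X_train).ravel()

for round_ in range(10):
    gp = GaussianProcessRegressor().fit(X_train, y_train)
    _, std = gp.predict(pool, return_std=True)
    i = int(np.argmax(std))                      # most uncertain candidate
    X_train = np.vstack([X_train, pool[i]])
    y_train = np.append(y_train, f(pool[i]).ravel())
    print(f"round {round_}: queried x={pool[i, 0]:+.2f}, max std={std[i]:.3f}")
```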