
Michael I. Jordan

Researcher at University of California, Berkeley

Publications: 1110
Citations: 241763

Michael I. Jordan is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics including Computer science and Inference. The author has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.

Papers

Fast Kernel Learning using Sequential Minimal Optimization

TL;DR: Experimental results show that the proposed dual formulation of the QCQP as a second-order cone program can be solved significantly more efficiently than with the general-purpose interior-point methods available in current optimization toolboxes.
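
The second-order cone reformulation can be made concrete with a small convex-programming sketch. The kernel matrices, bounds, and objective below are illustrative assumptions, not the paper's exact QCQP dual; the point is only how a quadratic form alpha^T K alpha enters as a second-order cone constraint ||L^T alpha|| <= t via a Cholesky factor L of K:

```python
import cvxpy as cp
import numpy as np

# Toy data and two base kernels (assumed for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.sign(rng.standard_normal(20))
K1 = X @ X.T                                                     # linear kernel
K2 = np.exp(-0.5 * np.square(X[:, None] - X[None, :]).sum(-1))   # RBF kernel

# alpha^T K_i alpha = ||L_i^T alpha||^2, so each quadratic constraint
# becomes a second-order cone constraint through a Cholesky factor.
L1 = np.linalg.cholesky(K1 + 1e-6 * np.eye(20))
L2 = np.linalg.cholesky(K2 + 1e-6 * np.eye(20))

alpha = cp.Variable(20)
t = cp.Variable()
constraints = [
    cp.SOC(t, L1.T @ alpha),   # ||L1^T alpha||_2 <= t
    cp.SOC(t, L2.T @ alpha),   # ||L2^T alpha||_2 <= t
    y @ alpha == 0,
    alpha >= 0,
    alpha <= 1.0,
]
prob = cp.Problem(cp.Maximize(cp.sum(alpha) - 0.5 * t), constraints)
prob.solve()
print(prob.status, round(prob.value, 4))
```
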
Posted Content

Towards Understanding the Transferability of Deep Representations.

TL;DR: This paper seeks to understand transferability from the perspectives of generalization, optimization, and feasibility, and demonstrates that transferred models tend to find flatter minima and to make the loss landscape more favorable through improved Lipschitzness, which substantially accelerates and stabilizes training.
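
The "flatter minima" claim can be probed with a crude sharpness estimate: compare the loss at a trained solution with the average loss at small random perturbations of the weights. The linear logistic model and perturbation radius below are stand-in assumptions, not the paper's networks or its exact metric:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # Logistic loss of a linear model, standing in for a network's loss.
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

# Synthetic data and a few gradient steps to reach a rough minimizer.
X = rng.standard_normal((200, 10))
y = np.sign(X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200))
w = np.zeros(10)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(y * (X @ w)))
    w += 0.5 * (X * (y * p)[:, None]).mean(axis=0)  # gradient descent step

def sharpness(w, radius=0.05, trials=100):
    # Mean loss increase over random perturbations of norm `radius`;
    # flatter minima show a smaller increase.
    deltas = rng.standard_normal((trials, w.size))
    deltas *= radius / np.linalg.norm(deltas, axis=1, keepdims=True)
    return np.mean([loss(w + d, X, y) for d in deltas]) - loss(w, X, y)

print(f"sharpness estimate: {sharpness(w):.6f}")
```
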
Posted Content

On divergences, surrogate loss functions and decentralized detection

TL;DR: A general correspondence is developed between a family of loss functions that act as surrogates for the 0-1 loss and the class of Ali-Silvey or f-divergence functionals, providing a basis for choosing and evaluating the surrogate losses frequently used in statistical learning.
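
The surrogate idea is easy to state numerically: convex losses such as the hinge and logistic losses upper-bound the 0-1 loss as functions of the margin m = y*f(x), which is what makes them tractable stand-ins. A minimal check (the margin grid is arbitrary):

```python
import numpy as np

# Margins m = y * f(x); the 0-1 loss and two convex surrogates.
m = np.linspace(-2.0, 2.0, 401)
zero_one = (m <= 0).astype(float)       # non-convex target loss
hinge = np.maximum(0.0, 1.0 - m)        # SVM surrogate
logistic = np.log2(1.0 + np.exp(-m))    # logistic surrogate (base 2 so it equals 1 at m = 0)

# Both surrogates are convex upper bounds on the 0-1 loss.
assert np.all(hinge >= zero_one) and np.all(logistic >= zero_one)
```
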
Proceedings Article

Approximate Inference Algorithms for Two-Layer Bayesian Networks

TL;DR: This work presents a class of approximate inference algorithms for graphical models of the QMR-DT type, and gives convergence rates for these algorithms and for the Jaakkola and Jordan (1999) algorithm, and verifies theoretical predictions empirically.
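
For intuition about why QMR-DT-type networks call for approximate inference: exact posterior computation in a two-layer noisy-OR network enumerates all disease configurations, so its cost grows as 2^(number of diseases). A toy instance with assumed numbers (three diseases, two positive findings) is small enough to compute exactly:

```python
import numpy as np
from itertools import product

# Tiny QMR-DT-style two-layer noisy-OR network (all numbers assumed).
# Diseases d_j ~ Bernoulli(prior_j); each finding f_i is a noisy-OR of
# its parents: P(f_i = 0 | d) = (1 - leak_i) * prod_j (1 - q_ij)^d_j.
priors = np.array([0.1, 0.05, 0.2])      # P(d_j = 1)
q = np.array([[0.8, 0.0, 0.3],           # q_ij = P(finding i | disease j alone)
              [0.0, 0.9, 0.5]])
leak = np.array([0.01, 0.02])
f_obs = np.array([1, 1])                 # both findings observed positive

def p_findings_given_d(d):
    p_off = (1 - leak) * np.prod((1 - q) ** d, axis=1)
    return np.prod(np.where(f_obs == 1, 1 - p_off, p_off))

# Exact posterior over diseases by enumeration: feasible only because the
# disease layer is tiny, which is exactly what motivates approximation.
states = list(product([0, 1], repeat=3))
joint = np.array([np.prod(priors ** np.array(s) * (1 - priors) ** (1 - np.array(s)))
                  * p_findings_given_d(np.array(s)) for s in states])
posterior = joint / joint.sum()
for s, p in zip(states, posterior):
    print(s, f"{p:.4f}")
```
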
Journal Article

Conformal prediction for the design problem

TL;DR: This work introduces a method to quantify predictive uncertainty in design settings by constructing confidence sets for predictions that account for the dependence between the training and test data.
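
For contrast with the standard setting, here is a minimal split-conformal sketch under exchangeability; the ridge predictor and synthetic data are assumed for illustration, and the paper's actual contribution, handling the dependence between training data and designed test points, is precisely what this vanilla recipe does not cover:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_predict(X_train, y_train, X_new):
    # Any point predictor works; ridge regression as a placeholder.
    A = X_train.T @ X_train + 1e-3 * np.eye(X_train.shape[1])
    w = np.linalg.solve(A, X_train.T @ y_train)
    return X_new @ w

X = rng.standard_normal((300, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.3 * rng.standard_normal(300)
X_fit, y_fit, X_cal, y_cal = X[:150], y[:150], X[150:], y[150:]

# Calibration: absolute residuals on held-out data.
scores = np.abs(y_cal - fit_predict(X_fit, y_fit, X_cal))
alpha = 0.1
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Confidence set for a new point: prediction +/- qhat covers y_new with
# probability >= 1 - alpha when the data are exchangeable.
x_new = rng.standard_normal(4)
pred = fit_predict(X_fit, y_fit, x_new[None, :])[0]
print(f"90% conformal interval: [{pred - qhat:.3f}, {pred + qhat:.3f}]")
```
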