Michael I. Jordan

Researcher at University of California, Berkeley

Publications: 1110
Citations: 241763

Michael I. Jordan is an academic researcher from the University of California, Berkeley. He has contributed to research in topics including Computer science and Inference. He has an h-index of 176 and has co-authored 1016 publications receiving 216204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.

Papers
Proceedings Article

Bayesian multicategory support vector machines

TL;DR: In this paper, the multicategory support vector machine (MSVM) is viewed as a MAP estimation procedure under an appropriate probabilistic interpretation of the classifier; this interpretation is then extended to a hierarchical Bayesian architecture and to a fully Bayesian inference procedure for multicategory classification based on data augmentation.
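
The MAP interpretation above can be illustrated with a toy penalized objective: a multicategory hinge loss plays the role of a negative pseudo-log-likelihood and a Gaussian prior on the weights contributes an L2 penalty, so the penalized minimizer is a MAP estimate. The sketch below (plain subgradient descent on a Crammer-Singer-style hinge loss, with made-up data and hyperparameters) is only an illustration of that view, not the paper's hierarchical or fully Bayesian procedure.

```python
import numpy as np

def multicategory_hinge(W, X, y):
    """Crammer-Singer-style multicategory hinge loss, summed over samples."""
    n = X.shape[0]
    scores = X @ W.T                          # (n, K) class scores
    correct = scores[np.arange(n), y]
    margins = 1.0 + scores - correct[:, None]
    margins[np.arange(n), y] = 0.0            # exclude the true class
    return np.maximum(0.0, margins).max(axis=1).sum()

def map_estimate(X, y, n_classes, lam=1.0, lr=1e-2, n_iter=500, seed=0):
    """Subgradient descent on hinge loss + (lam/2)||W||^2.

    Under a Gaussian prior on W and a pseudo-likelihood proportional to
    exp(-hinge loss), this penalized minimizer is the MAP estimate.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.01 * rng.standard_normal((n_classes, d))
    for _ in range(n_iter):
        scores = X @ W.T
        correct = scores[np.arange(n), y]
        margins = 1.0 + scores - correct[:, None]
        margins[np.arange(n), y] = -np.inf    # most-violating wrong class only
        c_star = margins.argmax(axis=1)
        violated = margins[np.arange(n), c_star] > 0
        grad = lam * W                        # gradient of the Gaussian prior term
        for i in np.flatnonzero(violated):
            grad[c_star[i]] += X[i]
            grad[y[i]] -= X[i]
        W -= lr * grad
    return W

# Toy usage with made-up data.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
y = rng.integers(0, 3, size=60)
W = map_estimate(X, y, n_classes=3)
print(multicategory_hinge(W, X, y) + 0.5 * (W ** 2).sum())  # MAP objective value
```
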
Journal Article

On kernel methods for covariates that are rankings

TL;DR: In this article, the Kendall and Mallows kernels are used for regression, classification, and testing problems based on permutation-valued features, as opposed to permutation-valued responses.
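
For reference, the two kernels named in the summary have simple standard forms: the Kendall kernel is the normalized difference between concordant and discordant item pairs, and the Mallows kernel exponentiates the number of discordant pairs. The sketch below is a brute-force O(n^2) illustration of those definitions, not the paper's estimators or test statistics; the example rank vectors are made up.

```python
import numpy as np
from itertools import combinations

def kendall_kernel(r1, r2):
    """Kendall kernel: (concordant - discordant pairs) / (n choose 2).

    r1, r2 are rank vectors over the same n items, e.g. r1[i] is the rank
    that the first ranking assigns to item i.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = np.sign(r1[i] - r1[j]) * np.sign(r2[i] - r2[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

def mallows_kernel(r1, r2, lam=1.0):
    """Mallows kernel: exp(-lam * number of discordant pairs)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    disc = sum(
        1
        for i, j in combinations(range(len(r1)), 2)
        if np.sign(r1[i] - r1[j]) * np.sign(r2[i] - r2[j]) < 0
    )
    return np.exp(-lam * disc)

# Example: two rankings of 4 items, differing by one adjacent swap.
print(kendall_kernel([1, 2, 3, 4], [1, 3, 2, 4]))  # (5 - 1) / 6 ≈ 0.667
print(mallows_kernel([1, 2, 3, 4], [1, 3, 2, 4]))  # exp(-1) ≈ 0.368
```
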
Journal Article

On surrogate loss functions and $f$-divergences

TL;DR: This work considers an elaboration of binary classification in which the covariates are not available directly but are transformed by a dimensionality-reducing quantizer Q; the analysis makes it possible to pick out the (strict) subset of surrogate loss functions that yield Bayes consistency for joint estimation of the discriminant function and the quantizer.
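
For context, the central correspondence underlying this analysis can be written compactly. The display below is a hedged sketch from memory of its general form (the precise constants and regularity conditions are in the paper): for a fixed quantizer Q inducing class-conditional measures mu and pi, the optimal surrogate risk equals the negative of an f-divergence determined by the loss; for example, the hinge loss is associated, up to constants, with the variational distance.

```latex
% Sketch (from memory) of the general surrogate-loss / f-divergence
% correspondence; see the paper for exact constants and conditions.
\inf_{\gamma}\; R_{\phi}(\gamma, Q) \;=\; -\, I_{f_{\phi}}(\mu, \pi),
\qquad
I_{f}(\mu, \pi) \;=\; \int f\!\left(\frac{d\mu}{d\pi}\right) d\pi .
```
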
Posted Content

Robust Optimization for Fairness with Noisy Protected Groups

TL;DR: In this paper, the authors study the consequences of relying on noisy protected group labels and introduce two new approaches using robust optimization that are guaranteed to satisfy fairness criteria on the true protected groups while minimizing a training objective.
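
As a toy illustration of why noisy protected group labels are a problem (this is not the authors' robust optimization method), one conservative evaluation strategy is to report the worst-case fairness violation over all relabelings within some budget of flips of the noisy labels. The sketch below computes the exact worst-case demographic parity gap over at most k flips by enumerating how many samples in each (group, prediction) cell are flipped; the function name and the flip budget are invented for illustration.

```python
import itertools
import numpy as np

def worst_case_dp_gap(y_pred, g_noisy, k):
    """Worst-case demographic parity gap |P(yhat=1|g=0) - P(yhat=1|g=1)|
    over all group relabelings that flip at most k of the noisy labels.

    The gap depends only on cell counts, so it suffices to enumerate how many
    samples are flipped from each (group, prediction) cell. Exact but O(k^4);
    intended only for small budgets k.
    """
    y_pred, g_noisy = np.asarray(y_pred), np.asarray(g_noisy)
    # c[g][yhat] = number of samples with noisy group g and prediction yhat.
    c = np.array([[np.sum((g_noisy == g) & (y_pred == y)) for y in (0, 1)]
                  for g in (0, 1)])
    best = 0.0
    # (f00, f01, f10, f11) = flips applied in cells (g=0,y=0), (0,1), (1,0), (1,1).
    for f00, f01, f10, f11 in itertools.product(range(k + 1), repeat=4):
        if f00 + f01 + f10 + f11 > k:
            continue
        if f00 > c[0, 0] or f01 > c[0, 1] or f10 > c[1, 0] or f11 > c[1, 1]:
            continue
        n0 = c[0].sum() - f00 - f01 + f10 + f11
        n1 = c[1].sum() - f10 - f11 + f00 + f01
        if n0 == 0 or n1 == 0:
            continue
        pos0 = c[0, 1] - f01 + f11
        pos1 = c[1, 1] - f11 + f01
        best = max(best, abs(pos0 / n0 - pos1 / n1))
    return best

# Toy example: predictions, noisy binary group labels, budget of 2 flips.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
g_noisy = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(worst_case_dp_gap(y_pred, g_noisy, k=2))
```
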
Journal Article

The asymptotics of ranking algorithms

TL;DR: This work presents a new approach to supervised ranking based on aggregation of partial preferences, and develops $U$-statistic-based empirical risk minimization procedures whose consistency results parallel those available for classification.
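
To make the U-statistic structure concrete: a pairwise ranking risk averages a loss over pairs of examples, which is exactly a second-order U-statistic. The sketch below evaluates such a risk for given scores, with a 0-1 pairwise loss and a hinge surrogate; it is a generic illustration with made-up data, not the paper's aggregation-of-partial-preferences procedures.

```python
import numpy as np
from itertools import combinations

def pairwise_ustat_risk(scores, y, loss="zero_one"):
    """Second-order U-statistic empirical risk for ranking.

    Averages a pairwise loss over all unordered pairs (i, j) with y_i != y_j:
      zero_one : 1 if the scores mis-order the pair
      hinge    : convex surrogate max(0, 1 - sign(y_i - y_j) * (s_i - s_j))
    """
    scores, y = np.asarray(scores, float), np.asarray(y, float)
    total, count = 0.0, 0
    for i, j in combinations(range(len(y)), 2):
        if y[i] == y[j]:
            continue
        sgn = np.sign(y[i] - y[j])
        margin = sgn * (scores[i] - scores[j])
        total += (margin <= 0) if loss == "zero_one" else max(0.0, 1.0 - margin)
        count += 1
    return total / max(count, 1)

# Toy example: scores that mostly agree with the relevance labels y.
y = np.array([3, 2, 2, 1, 0])
scores = np.array([2.5, 2.0, 1.0, 1.2, 0.1])
print(pairwise_ustat_risk(scores, y, "zero_one"))
print(pairwise_ustat_risk(scores, y, "hinge"))
```
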