Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and inference, has an h-index of 176, and has co-authored 1016 publications receiving 216,204 citations. His previous affiliations include Stanford University and Princeton University.
Papers
Journal ArticleDOI
Case report: A rare case of eosinophilic cholecystitis presenting after talc pleurodesis for recurrent pneumothorax
TL;DR: A case of eosinophilic cholecystitis (EC) is reported following a talc pleurodesis (TP) procedure for persistent secondary pneumothorax, associated with activation of systemic acute inflammatory responses, including an increase in serum interleukin-8 (IL-8), a potent mediator of eosinophil chemotaxis.
Computational methods for meiotic recombination inference
Michael I. Jordan, Junming Yin +1 more
TL;DR: This thesis investigates two computational problems that arise in studying meiotic recombination and proposes a new statistical model that can jointly estimate the crossover rate, the gene conversion rate, and the mean conversion tract length, a joint estimation task widely regarded as very difficult.
Journal Article
Instance-Dependent Confidence and Early Stopping for Reinforcement Learning
TL;DR: This work addresses the problem of obtaining sharp instance-dependent confidence regions for the policy evaluation problem and the optimal value estimation problem of an MDP, given access to an instance-optimal algorithm, and proposes a data-dependent stopping rule for instance-optimal algorithms.
Proceedings Article
On information divergence measures, surrogate loss functions and decentralized hypothesis testing
TL;DR: A general correspondence is established between two classes of statistical functions, Ali-Silvey distances (f-divergences) and surrogate loss functions, showing how to determine the unique f-divergence induced by a given surrogate loss and characterizing all surrogate loss functions that realize a given f-divergence.
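The summary above refers to f-divergences (Ali-Silvey distances), a family of discrepancy measures between probability distributions. As a minimal illustrative sketch of the family itself (not of the paper's loss-to-divergence correspondence), the following computes D_f(P‖Q) = Σ_x q(x)·f(p(x)/q(x)) for two standard choices of the convex generator f; all variable names here are illustrative assumptions:

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P||Q) for discrete distributions p, q (assumes q > 0 everywhere)."""
    t = p / q  # likelihood ratio at each outcome
    return float(np.sum(q * f(t)))

# KL divergence is the f-divergence with f(t) = t*log(t), using 0*log(0) = 0.
kl = lambda t: np.where(t > 0, t * np.log(np.where(t > 0, t, 1.0)), 0.0)
# Total variation distance is the f-divergence with f(t) = |t - 1| / 2.
tv = lambda t: 0.5 * np.abs(t - 1)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

print(f_divergence(p, q, kl))  # KL(P||Q)
print(f_divergence(p, q, tv))  # total variation distance
```

Different convex generators f yield different divergences, but all share the same basic form, which is what makes a uniform correspondence with surrogate losses possible.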
Posted Content
A Deep Generative Model for Semi-Supervised Classification with Noisy Labels
TL;DR: A new semi-supervised deep generative model that explicitly models noisy labels, called the Mislabeled VAE (M-VAE), which can perform better than existing deep generative models that do not account for label noise.
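The summary above concerns models that explicitly account for label noise. As a minimal sketch of the underlying idea only (this is a generic class-confusion noise model, not the M-VAE), observed labels can be related to true labels through a hypothetical confusion matrix C where C[i, j] = P(observed label j | true label i):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-class label-noise model (rows sum to 1).
C = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

# Simulate clean labels, then corrupt each one via its confusion-matrix row.
true_labels = rng.integers(0, 3, size=1000)
noisy_labels = np.array([rng.choice(3, p=C[y]) for y in true_labels])

# Under this matrix, roughly 20% of labels should be corrupted.
print(np.mean(noisy_labels != true_labels))
```

A model that treats the noisy labels as the ground truth will absorb this corruption into its parameters; modeling the noise process explicitly, as the paper's approach does within a deep generative model, lets the latent "true" label be inferred instead.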