Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1,110
Citations - 241,763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and inference. He has an h-index of 176 and has co-authored 1,016 publications receiving 216,204 citations. His previous affiliations include Stanford University and Princeton University.
Papers
Proceedings Article
Feature selection for high-dimensional genomic microarray data
TL;DR: This paper reports the successful application of feature selection methods to a classification problem in molecular biology involving only 72 data points in a 7,130-dimensional space, and investigates regularization methods as an alternative to feature selection.
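The paper's specific selection criteria are not reproduced here, but the general setting (ranking features when samples are far fewer than dimensions) can be sketched with a simple univariate filter; the function name and the correlation score are illustrative choices, not the paper's method.

```python
import numpy as np

def rank_features(X, y, k=10):
    """Rank features by a simple univariate filter score
    (absolute Pearson correlation with the class label) and
    return the indices of the top-k features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    scores = np.abs(Xc.T @ yc) / denom
    return np.argsort(scores)[::-1][:k]

# Toy data at the scale described in the summary:
# 72 samples, 7,130 features, with feature 3 carrying the signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 7130))
y = rng.integers(0, 2, size=72)
X[:, 3] += 3.0 * y
top = rank_features(X, y, k=5)
```

A filter like this scores each feature independently, which is cheap in high dimensions; regularization (as the paper investigates) instead lets the classifier itself shrink irrelevant weights.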
Posted Content
High-Dimensional Continuous Control Using Generalized Advantage Estimation
TL;DR: The authors propose a trust region optimization procedure for both the policy and the value function, each represented by a neural network. The approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots and learning a policy for getting the biped to stand up from lying on the ground.
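The generalized advantage estimator named in the title has a well-known backward recursion over TD residuals, A_t = δ_t + γλ·A_{t+1} with δ_t = r_t + γV(s_{t+1}) − V(s_t). A minimal sketch of that recursion (the trust-region updates themselves are omitted; the function name is illustrative):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute generalized advantage estimates for one episode.

    rewards: per-step rewards r_0 .. r_{T-1}
    values:  value estimates V(s_0) .. V(s_T), length T+1,
             where the last entry bootstraps the final state
    """
    T = len(rewards)
    # TD residuals: delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros(T)
    running = 0.0
    # Backward recursion: A_t = delta_t + gamma*lam*A_{t+1}
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# Toy episode: constant reward 1, zero value baseline.
# With gamma=0.5, lam=1.0 this gives A = [1.75, 1.5, 1.0].
adv = gae_advantages(np.ones(3), np.zeros(4), gamma=0.5, lam=1.0)
```

The λ parameter interpolates between low-variance one-step TD estimates (λ=0) and low-bias Monte Carlo returns (λ=1).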
Journal ArticleDOI
Sensorimotor adaptation in speech production.
John F. Houde, Michael I. Jordan +1 more
TL;DR: It was found that speakers learn to adjust their production of a vowel to compensate for feedback alterations that change the vowel's perceived phonetic identity; moreover, the effect generalizes across phonetic contexts and to different vowels.
Journal ArticleDOI
Learning Dependency-Based Compositional Semantics
TL;DR: A new semantic formalism, dependency-based compositional semantics (DCS), is developed, and a log-linear distribution over DCS logical forms is defined; the system obtains accuracies comparable to state-of-the-art systems that do require annotated logical forms.
Proceedings Article
Theoretically Principled Trade-off between Robustness and Accuracy
TL;DR: TRADES decomposes the prediction error for adversarial examples (the robust error) as the sum of the natural (classification) error and a boundary error, and provides a differentiable upper bound using the theory of classification-calibrated losses.
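The surrogate objective implied by that decomposition is commonly written as a clean-data cross-entropy plus a weighted KL divergence between the model's predictions on the clean and adversarial inputs. A minimal numerical sketch of that surrogate for a single example (the inner maximization that would find the adversarial logits is omitted, and the function names are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def trades_loss(logits_nat, logits_adv, label, beta=1.0):
    """Surrogate of the TRADES-style objective for one example:
    cross-entropy on the natural input plus beta times the KL
    divergence between natural and adversarial predictions."""
    p = softmax(logits_nat)
    q = softmax(logits_adv)
    ce = -np.log(p[label])                       # natural error term
    kl = np.sum(p * (np.log(p) - np.log(q)))     # boundary term
    return ce + beta * kl

a = np.array([2.0, 0.0])
same = trades_loss(a, a, label=0)                 # KL term is zero
worse = trades_loss(a, np.array([0.0, 2.0]), label=0)
```

The hyperparameter beta controls the robustness/accuracy trade-off: beta = 0 recovers plain cross-entropy training, while large beta emphasizes prediction stability under perturbation.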