Michael I. Jordan
Researcher at University of California, Berkeley
Publications - 1110
Citations - 241763
Michael I. Jordan is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and inference, has an h-index of 176, and has co-authored 1,016 publications receiving 216,204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.
Papers
Proceedings Article
Adding vs. Averaging in Distributed Primal-Dual Optimization
TL;DR: A novel generalization of the recent communication-efficient primal-dual framework (CoCoA) for distributed optimization is presented, which allows local updates to be combined additively into the global parameters at each iteration, whereas previous schemes with convergence guarantees allowed only conservative averaging.
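To make the adding-vs-averaging distinction concrete, here is a toy numpy sketch, not the CoCoA framework itself: four simulated workers each take a gradient step on their shard of a least-squares problem, and the per-worker updates are combined into the global parameters either by conservative averaging or by additive combination. The objective, worker count, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 5                                 # workers, parameter dimension
A = rng.normal(size=(40, d))
b = rng.normal(size=40)
parts = np.array_split(np.arange(40), K)    # partition rows across workers

def local_update(w, idx, lr=0.01):
    """One local gradient step on a worker's shard of 0.5 * ||A w - b||^2."""
    Ai, bi = A[idx], b[idx]
    return -lr * Ai.T @ (Ai @ w - bi)

def run(combine, iters=50):
    """Iterate: each worker computes a local update, then combine them."""
    w = np.zeros(d)
    for _ in range(iters):
        deltas = [local_update(w, idx) for idx in parts]
        w = w + combine(deltas)
    return 0.5 * np.linalg.norm(A @ w - b) ** 2

loss_avg = run(lambda ds: np.mean(ds, axis=0))  # conservative averaging
loss_add = run(lambda ds: np.sum(ds, axis=0))   # additive combination
```

On this quadratic, summing the per-shard gradients reproduces the full gradient, so the additive scheme takes an effectively K-times-larger step per round and reaches a lower loss in the same number of communication rounds, which is the intuition (not the proof) behind preferring adding when it can be done safely.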
Posted Content
SparkNet: Training Deep Networks in Spark
TL;DR: SparkNet is a framework for training deep networks in Spark; it includes a convenient interface for reading data from Spark RDDs, a Scala interface to the Caffe deep learning framework, and a lightweight multi-dimensional tensor library.
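SparkNet-style data-parallel training alternates local optimization on each partition with periodic parameter averaging. The following is a minimal numpy simulation of that pattern on a linear model, not SparkNet's actual Spark/Caffe API; the shard count, local iteration count, and least-squares objective are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)
shards = np.array_split(np.arange(200), 4)   # one data shard per worker

def local_train(w, idx, iters=10, lr=0.05):
    """Run several gradient steps on one worker's shard, starting from w."""
    w = w.copy()
    Xi, yi = X[idx], y[idx]
    for _ in range(iters):
        w -= lr * Xi.T @ (Xi @ w - yi) / len(idx)
    return w

w = np.zeros(3)
for _round in range(20):                     # communication rounds
    local_ws = [local_train(w, idx) for idx in shards]
    w = np.mean(local_ws, axis=0)            # average and broadcast back
```

Each round costs one parameter exchange regardless of how many local steps the workers take, which is the communication saving the framework exploits.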
Journal ArticleDOI
Artificial Intelligence—The Revolution Hasn’t Happened Yet
TL;DR: The authors praise Jordan for bringing much-needed clarity about the current status of Artificial Intelligence (AI), explaining the challenges that lie ahead and outlining what is missing and what remains to be done.
Journal ArticleDOI
First-order methods almost always avoid strict saddle points
Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht +5 more
TL;DR: It is established that first-order methods avoid strict saddle points for almost all initializations, and that neither access to second-order derivative information nor randomness beyond the initialization is necessary to provably avoid strict saddle points.
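The "almost all initializations" claim can be seen on a small example. Below is an illustrative sketch, not the paper's analysis: plain gradient descent on f(x, y) = (x² − y²)/2, which has a strict saddle at the origin. An initialization exactly on the stable manifold (the x-axis, a measure-zero set) converges to the saddle, while a generic initialization with any tiny y-component escapes along the negative-curvature direction.

```python
import numpy as np

def grad_descent(x0, steps=300, lr=0.1):
    """Gradient descent on f(x, y) = (x**2 - y**2) / 2; strict saddle at 0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = np.array([x[0], -x[1]])   # gradient of f
        x = x - lr * g
    return x

on_stable_manifold = grad_descent([1.0, 0.0])   # measure-zero initialization
generic = grad_descent([1.0, 1e-6])             # generic initialization
```

The first run contracts toward the saddle (the y-coordinate stays exactly zero), while in the second the y-coordinate grows geometrically by a factor of 1.1 per step and escapes, illustrating why convergence to strict saddles happens only from a measure-zero set of starts.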