Ameet Talwalkar
Researcher at Carnegie Mellon University
Publications - 135
Citations - 19,501
Ameet Talwalkar is an academic researcher at Carnegie Mellon University. His research focuses on computer science and hyperparameter optimization. He has an h-index of 49 and has co-authored 115 publications receiving 13,897 citations. Previous affiliations include the University of California, Berkeley and the Courant Institute of Mathematical Sciences.
Papers
Book
Foundations of Machine Learning
TL;DR: This graduate-level textbook introduces fundamental concepts and methods in machine learning, provides the theoretical underpinnings of the algorithms, and illustrates key aspects of their application.
Journal ArticleDOI
Federated Learning: Challenges, Methods, and Future Directions
TL;DR: In this paper, the authors discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
Journal Article
MLlib: machine learning in apache spark
Xiangrui Meng, Joseph K. Bradley, Burak Yavuz, Evan R. Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Bosagh Zadeh, Matei Zaharia, Ameet Talwalkar +15 more
TL;DR: MLlib is an open-source distributed machine learning library for Apache Spark that provides efficient functionality for a wide range of learning settings and includes several underlying statistical, optimization, and linear algebra primitives.
Federated Optimization in Heterogeneous Networks
TL;DR: This work introduces FedProx, a framework for tackling heterogeneity in federated networks. FedProx provides convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of local work.
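To make the summary concrete, here is a minimal NumPy sketch of the core FedProx idea: each device approximately minimizes its local loss plus a proximal term (mu/2)||w - w_global||^2 that keeps the local model near the current global model, and different devices may run different numbers of local steps. The function names, the toy quadratic losses, and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedprox_local_update(grad_fk, w_global, mu=0.1, lr=0.01, steps=50):
    """Approximately minimize F_k(w) + (mu/2) * ||w - w_global||^2 by gradient descent.

    grad_fk: gradient of device k's local loss F_k (assumed callable here).
    The proximal term mu * (w - w_global) pulls the local iterate toward the
    global model; `steps` can differ per device (variable amounts of work).
    """
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_fk(w) + mu * (w - w_global))
    return w

# Toy demo: device k has quadratic loss F_k(w) = ||w - c_k||^2 / 2,
# so grad F_k(w) = w - c_k. Devices run different numbers of local steps.
rng = np.random.default_rng(0)
centers = [rng.normal(size=3) for _ in range(4)]
w_t = np.zeros(3)  # current global model
updates = [fedprox_local_update(lambda w, c=c: w - c, w_t, steps=s)
           for c, s in zip(centers, [5, 20, 50, 100])]
w_next = np.mean(updates, axis=0)  # server averages the inexact local solutions
```

Each local solution lands between the global model and the device's own optimum, with the proximal weight `mu` controlling how far devices are allowed to drift; the server then averages these inexact solutions, as in the paper's setup.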
Proceedings Article
Federated multi-task learning
TL;DR: In this paper, the authors propose MOCHA, a novel systems-aware optimization method for distributed multi-task learning that is robust to practical systems issues such as high communication cost, stragglers, and fault tolerance.