Jinhyun So
Researcher at University of Southern California
Publications - 20
Citations - 821
Jinhyun So is an academic researcher from the University of Southern California. The author has contributed to research in the topics of computer science and scalability. The author has an h-index of 9 and has co-authored 18 publications receiving 348 citations. Previous affiliations of Jinhyun So include KAIST.
Papers
Posted Content
FedML: A Research Library and Benchmark for Federated Machine Learning
Chaoyang He, Songze Li, Jinhyun So, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Li Shen, Peilin Zhao, Kang Yan, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, A. Salman Avestimehr +16 more
TL;DR: This paper introduces FedML, an open research library and benchmark that facilitates the development of new federated learning algorithms and fair performance comparisons, providing an efficient and reproducible means of developing and evaluating algorithms for the federated learning research community.
Journal ArticleDOI
Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
TL;DR: This article proposes the first secure aggregation framework, named Turbo-Aggregate, which employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy.
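One building block named in this abstract is additive secret sharing, in which each user splits its model update into random shares that sum to the original value, so the server can recover only the aggregate. A minimal sketch of that primitive (illustrative only; function names and the field size are assumptions, not the paper's code, and the multi-group circular strategy and dropout-handling redundancy are omitted):

```python
import random

PRIME = 2**31 - 1  # finite-field modulus, chosen arbitrarily for illustration

def share(secret, n):
    """Split `secret` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Sum shares mod PRIME; no individual input is revealed."""
    return sum(all_shares) % PRIME

# Each of 3 users secret-shares its model update (a scalar here).
updates = [10, 20, 30]
shares_per_user = [share(u, 3) for u in updates]

# Summing every share recovers the aggregate, not any single update.
total = aggregate(s for user in shares_per_user for s in user)
assert total == sum(updates) % PRIME
```

Any single share (or any proper subset per user) is uniformly random, which is why the server learns only the sum.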
Journal ArticleDOI
Byzantine-Resilient Secure Federated Learning
TL;DR: This paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning, based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously.
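The stochastic quantization mentioned here maps real-valued updates onto a finite grid while remaining unbiased in expectation, which is what makes them compatible with finite-field secret sharing. A minimal sketch of such an unbiased quantizer (a generic illustration, not BREA's exact scheme; the function name and grid are assumptions):

```python
import random

def stochastic_quantize(x, levels=8):
    """Round x in [0, 1] onto a grid of `levels` points, unbiased in expectation."""
    scaled = x * (levels - 1)
    low = int(scaled)
    # Round up with probability equal to the fractional part,
    # so E[output] == x.
    q = low + (1 if random.random() < scaled - low else 0)
    return q / (levels - 1)
```

Averaging many quantized copies of the same value converges to the value itself, which is the property that preserves convergence of the learning algorithm.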
Journal ArticleDOI
CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning
TL;DR: CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers, and provides significant speedup over cryptographic approaches based on multi-party computing (MPC).
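The information-theoretic privacy claimed here rests on polynomial (Lagrange-style) encoding of the data across workers: a secret is embedded in a random polynomial, workers receive evaluations at distinct points, and the secret is recovered by interpolation. A minimal Shamir-style sketch of that encoding idea (names and the field size are illustrative assumptions, not CodedPrivateML's implementation, which additionally lets workers compute on the encoded data):

```python
import random

PRIME = 2**13 - 1  # small prime field for illustration

def encode(secret, evals, deg=1):
    """Hide `secret` as the constant term of a random degree-`deg` polynomial,
    returning its evaluations at the points in `evals` (one per worker)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(deg)]
    return [sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
            for x in evals]

def interpolate_at_zero(points):
    """Lagrange interpolation at x = 0 recovers the secret from deg+1 shares."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Multiply by the modular inverse of den (PRIME is prime).
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = encode(42, [1, 2, 3], deg=1)          # distribute to 3 workers
recovered = interpolate_at_zero([(1, shares[0]), (2, shares[1])])
assert recovered == 42                          # any 2 of 3 shares suffice
```

Fewer than deg + 1 shares reveal nothing about the secret, which is the information-theoretic guarantee; the extra evaluation points supply the redundancy that also speeds up distributed training.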