Patent
Systems and methods of distributed optimization
TL;DR: In this article, the authors provide systems and methods of determining a global model from a plurality of user devices, where each local update can be determined by the respective user device based at least in part on one or more data examples stored on that device.
Abstract:
Systems and methods of determining a global model are provided. In particular, one or more local updates can be received from a plurality of user devices. Each local update can be determined by the respective user device based at least in part on one or more data examples stored on the user device. The one or more data examples stored on the plurality of user devices are distributed on an uneven basis, such that no user device includes a representative sample of the overall distribution of data examples. The local updates can then be aggregated to determine a global model.
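The aggregation step the abstract describes can be sketched as a weighted average of per-device updates. This is a minimal illustration, not the patent's claimed implementation; the function name and the choice of data-count weighting are assumptions:

```python
import numpy as np

def aggregate_global_model(local_updates, weights):
    """Weighted average of per-device model updates.

    local_updates: list of parameter vectors, one per user device
    weights: e.g. the number of local data examples per device, so a
             device with more data contributes proportionally more.
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()  # normalize to a convex combination
    return sum(w * np.asarray(u) for w, u in zip(weights, local_updates))

# Three devices with uneven, non-representative local data.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
counts = [10, 30, 60]  # data examples held by each device
global_update = aggregate_global_model(updates, counts)
```

Weighting by local example count means the aggregate reflects the overall data distribution even though no single device holds a representative sample.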
Citations
Patent
Machine learning in edge analytics
Nagaraju Pradeep Baliganapalli, Adam Jamison Oliner, Gilmore Brian Matthew, Dean Erick Anthony, Jiahan Wang +4 more
TL;DR: As discussed by the authors, disclosed is a technique that can be performed by an electronic device, which can include generating raw data based on inputs to the electronic device and sending the raw data or data items over a network to a server computer system.
Patent
Analytics for edge devices
Nagaraju Pradeep Baliganapalli, Adam Jamison Oliner, Gilmore Brian Matthew, Dean Erick Anthony, Jiahan Wang +4 more
TL;DR: In this paper, the authors present a technique that can be performed by an electronic device to generate timestamped events including raw data generated by the device; a new operation can then be executed in accordance with new instructions to obtain new results.
Patent
Systems and Methods for Distributed On-Device Learning with Data-Correlated Availability
TL;DR: In this paper, the authors present systems and methods for distributed training of machine learning models, which include obtaining, by one or more computing devices, a plurality of regions based at least in part on the temporal availability of user devices, and providing a current version of a machine-learned model associated with a region to a plurality of selected user devices within that region.
Patent
Data sovereignty compliant machine learning
TL;DR: In this paper, the authors present a system for managing the deployment and updating of incremental machine learning models across multiple geographic sovereignties. The system is configured to perform operations including: receiving a first machine learning model via a first coordination agent; sending the first model to a second coordination agent in a second sovereign region; and receiving from the second coordination agent a second model, which is based on updates to the first model using a second training data set corresponding to the second sovereign region.
Patent
Distributed training method and device for machine learning model and computer equipment
Huang Hao, Qu Wei, Hou Haoxiang +2 more
TL;DR: In this paper, a distributed training method for a machine learning model is presented. The method is executed by the working nodes of a distributed node cluster, which can be realized through a cloud server, and comprises the following steps: acquiring more than one group of training samples; processing each group according to the current training parameters of the model to obtain the corresponding parameter gradients; and determining a current local gradient based on the parameter gradients of the groups, where the transmitted local gradient is used to instruct the parameter node to determine the current global gradient.
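A working node's step in the abstract above (per-group parameter gradients, averaged into one local gradient for the parameter node) can be sketched as follows. The least-squares model and all names here are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def local_gradient(theta, sample_groups):
    """One working node's step: compute a parameter gradient per group
    of training samples (here for a least-squares model as a stand-in),
    then average the group gradients into the node's current local
    gradient, which would be transmitted to the parameter node."""
    group_grads = []
    for X, y in sample_groups:
        residual = X @ theta - y
        group_grads.append(X.T @ residual / len(y))  # this group's gradient
    return np.mean(group_grads, axis=0)  # the node's local gradient

# Two groups of samples processed under the current parameters theta = [0].
X = np.array([[1.0], [1.0]])
y = np.array([1.0, 1.0])
g = local_gradient(np.array([0.0]), [(X, y), (X, y)])
```

The parameter node would then average the local gradients it receives from all working nodes to form the global gradient.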
References
Dissertation
Learning Multiple Layers of Features from Tiny Images
TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Journal ArticleDOI
Distributed Subgradient Methods for Multi-Agent Optimization
Angelia Nedic, Asuman Ozdaglar +1 more
TL;DR: The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
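The multi-agent update underlying this line of work can be sketched as a consensus step (mixing neighbors' iterates through a doubly stochastic weight matrix) followed by a local subgradient step. The matrix, step size, and toy objectives below are illustrative assumptions:

```python
import numpy as np

def subgradient_step(x, A, grads, alpha):
    """One round of the consensus-plus-subgradient update:
    x_i <- sum_j A[i, j] * x_j - alpha * g_i,
    where A is doubly stochastic and g_i is agent i's own (sub)gradient."""
    return A @ x - alpha * grads

# Two agents jointly minimizing f1(x) = (x - 1)^2 and f2(x) = (x + 1)^2;
# the sum f1 + f2 is minimized at x = 0.
A = np.array([[0.5, 0.5], [0.5, 0.5]])  # fully mixing weight matrix
x = np.array([5.0, -5.0])               # each agent's initial iterate
for _ in range(50):
    grads = 2.0 * (x - np.array([1.0, -1.0]))  # each agent's own gradient
    x = subgradient_step(x, A, grads, alpha=0.1)
```

With a constant step size the agents reach consensus near, but not exactly at, the joint minimizer; that residual error versus iteration count is exactly the tradeoff the paper's convergence rates characterize.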
Proceedings Article
Communication-Efficient Learning of Deep Networks from Decentralized Data
TL;DR: In this paper, the authors present a decentralized approach to federated learning of deep networks based on iterative model averaging and conduct an extensive empirical evaluation considering five different model architectures and four datasets.
Posted Content
Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon +5 more
TL;DR: Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
Proceedings Article
Accelerating Stochastic Gradient Descent using Predictive Variance Reduction
Rie Johnson, Tong Zhang +1 more
TL;DR: It is proved that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG), but the analysis is significantly simpler and more intuitive.
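The variance-reduction scheme (SVRG) can be sketched as follows: keep a snapshot of the parameters, compute the full gradient at the snapshot once per epoch, and correct each stochastic step with it. The function names, toy objective, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def svrg(grad_i, n, w0, eta=0.1, epochs=20, seed=0):
    """Minimal SVRG loop. grad_i(w, i) returns the gradient of the i-th
    component function; n is the number of components."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, computed once per epoch.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced stochastic step: the correction term
            # (grad at snapshot minus mu) has zero mean over i.
            w = w - eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

# f_i(w) = 0.5 * (w - x_i)^2, so the sum is minimized at the mean of x_i.
xs = np.array([1.0, 2.0, 3.0, 6.0])
w_star = svrg(lambda w, i: w - xs[i], len(xs), np.zeros(1))
```

Because the corrected step's variance shrinks as the iterates approach the snapshot, a constant step size suffices, which is what gives the method its fast convergence rate.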