
Aryan Mokhtari

Researcher at University of Texas at Austin

Publications: 157
Citations: 4,972

Aryan Mokhtari is an academic researcher at the University of Texas at Austin. His research focuses on topics such as rate of convergence and convex functions. He has an h-index of 31 and has co-authored 135 publications receiving 3,232 citations. His previous affiliations include the University of Pennsylvania and the Massachusetts Institute of Technology.

Papers
Proceedings Article

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization.

TL;DR: Presents FedPAQ, a communication-efficient federated learning method with periodic averaging and quantization, which achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrates the communication-computation tradeoff the method provides.
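
A minimal NumPy sketch of the round structure the TL;DR describes: clients run several local SGD steps between communications (periodic averaging) and quantize their model updates before the uplink. The quantizer, `client_grads_fn`, and all step sizes here are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def quantize(v, levels=256):
    # Simplified uniform quantizer: a stand-in for the low-precision
    # quantization operator applied to uplink messages in FedPAQ.
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * (levels / 2)) / (levels / 2) * scale

def fedpaq_round(global_w, client_grads_fn, clients, local_steps=5, lr=0.1):
    # One communication round: each client runs `local_steps` of local SGD
    # (periodic averaging), quantizes its model difference, and the server
    # averages the quantized updates.
    updates = []
    for c in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * client_grads_fn(c, w)      # local SGD step
        updates.append(quantize(w - global_w))   # quantize before sending
    return global_w + np.mean(updates, axis=0)   # server-side averaging
```

Larger `local_steps` means fewer communication rounds at the cost of more local computation, which is the communication-computation tradeoff the paper quantifies.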
Proceedings Article

Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach

TL;DR: A personalized variant of the well-known Federated Averaging algorithm is studied, and its performance is characterized in terms of how it is affected by the closeness of the underlying distributions of user data, measured via distribution distances such as Total Variation and the 1-Wasserstein metric.
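
A sketch of the MAML-style local update behind this personalized variant: each client adapts the global model with one inner gradient step, then updates the meta-model through the adapted loss. The finite-difference Hessian-vector approximation and the step sizes `alpha`, `beta`, and `eps` are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def per_fedavg_local_step(w, grad_fn, alpha=0.01, beta=0.1, eps=1e-4):
    # Inner step: personalize the global model w with one gradient step.
    g = grad_fn(w)
    w_adapted = w - alpha * g
    g_adapted = grad_fn(w_adapted)
    # Meta-gradient (I - alpha * Hessian) @ g_adapted, with the
    # Hessian-vector product approximated by finite differences.
    hvp = (grad_fn(w + eps * g_adapted) - grad_fn(w - eps * g_adapted)) / (2 * eps)
    return w - beta * (g_adapted - alpha * hvp)
```

The server then averages these locally updated models as in standard Federated Averaging, so each user can later personalize the shared model with a few gradient steps.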
Posted Content

Personalized Federated Learning: A Meta-Learning Approach

TL;DR: A personalized variant of the well-known Federated Averaging algorithm is studied, and its performance is characterized by the closeness of the underlying distributions of user data, measured in terms of distribution distances such as Total Variation and the 1-Wasserstein metric.
Proceedings Article

A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach.

TL;DR: In this paper, two variants of gradient descent-ascent for solving saddle point problems, extra-gradient (EG) and optimistic gradient descent-ascent (OGDA), are shown to admit a unified analysis as approximations of the classical proximal point method.
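
A sketch of the two update rules being unified, for a smooth min-max problem min_x max_y f(x, y); `gx` and `gy` stand for the partial gradients and `eta` is an illustrative step size. EG evaluates gradients at an extrapolated midpoint, while OGDA reuses the previous gradient, and both can be read as one-step approximations of the implicit proximal point update.

```python
def extragradient_step(x, y, gx, gy, eta=0.1):
    # Trial (midpoint) step, then an update using midpoint gradients.
    x_mid, y_mid = x - eta * gx(x, y), y + eta * gy(x, y)
    return x - eta * gx(x_mid, y_mid), y + eta * gy(x_mid, y_mid)

def ogda_step(x, y, gx, gy, prev_gx, prev_gy, eta=0.1):
    # Optimistic step: extrapolate with the previous gradient so only one
    # new gradient evaluation is needed per iteration.
    new_gx, new_gy = gx(x, y), gy(x, y)
    x_new = x - 2 * eta * new_gx + eta * prev_gx
    y_new = y + 2 * eta * new_gy - eta * prev_gy
    return x_new, y_new, new_gx, new_gy
```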
Journal Article

Network Newton Distributed Optimization Methods

TL;DR: This paper proposes the network Newton (NN) method, a distributed algorithm that incorporates second-order information via a distributed implementation of approximations to a suitably chosen Newton step, and proves convergence to a point close to the optimal argument at a rate that is at least linear.
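
A centralized toy sketch of the series approximation behind the NN-K directions: split the Hessian as H = D - B, where D is the part each node can compute locally, and truncate the resulting expansion of the Newton step after K terms. The dense matrices here are illustrative; in the actual distributed setting each extra term costs one round of communication with neighbors.

```python
import numpy as np

def network_newton_direction(H, g, K=2):
    # Approximate the Newton direction d solving H d = -g via the splitting
    # H = D - B and the recursion d_{k+1} = D^{-1} (B d_k - g).
    D = np.diag(np.diag(H))      # locally computable (block-)diagonal part
    B = D - H                    # coupling between neighboring nodes
    Dinv = np.linalg.inv(D)
    d = -Dinv @ g                # NN-0 direction
    for _ in range(K):
        d = Dinv @ (B @ d - g)   # one more series term per neighbor exchange
    return d
```

At the fixed point of the recursion, (D - B) d = -g, i.e. the exact Newton direction; truncating after K terms yields the NN-K family of approximations.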