
Abdurakhmon Sadiev

Researcher at Moscow Institute of Physics and Technology

Publications: 16
Citations: 113

Abdurakhmon Sadiev is an academic researcher from the Moscow Institute of Physics and Technology. The author has contributed to research in topics: Computer science & Saddle point. The author has an h-index of 3 and has co-authored 8 publications receiving 22 citations.

Papers
Book Chapter

Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem

TL;DR: The proposed approach performs at least as well as the best existing approaches, and for a special set-up (simplex-type constraints and closeness of the Lipschitz constants in the 1- and 2-norms) it reduces the required number of oracle calls (function evaluations) by a factor of n/log n.
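Gradient-free methods of this kind are typically built on a randomized two-point gradient estimator that replaces exact gradients with function-value differences. The following is a minimal sketch of such an estimator, not the paper's exact algorithm; the function name `f`, the smoothing radius `tau`, and the unit-sphere sampling are illustrative assumptions.

```python
# A minimal sketch of a two-point randomized gradient-free estimator
# (illustrative, not the authors' exact method).
import numpy as np

def two_point_grad_estimate(f, x, tau=1e-4, rng=None):
    """Estimate grad f(x) from two function evaluations along a random unit direction."""
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                      # random direction on the unit sphere
    # Finite difference along e; the dimension factor x.size makes this an
    # estimator of the gradient of the smoothed function.
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e
```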
Book Chapter

Zeroth-Order Algorithms for Smooth Saddle-Point Problems

TL;DR: Stochastic smooth (strongly) convex-concave saddle-point problems are solved using zeroth-order oracles, and the theoretical analysis shows that when the optimization set is a simplex, only a factor of $\log n$ is lost in the stochastic convergence term.
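For simplex-constrained saddle points, the mild $\log n$ dependence mentioned above is typically obtained by combining a zeroth-order gradient estimate with an entropic (mirror-descent) step. Below is a self-contained sketch of one such descent-ascent step; it is an assumption-level illustration, not the paper's algorithm, and `f`, `tau`, and `lr` are hypothetical placeholders.

```python
# A minimal sketch of a zeroth-order mirror descent-ascent step for
# min_x max_y f(x, y) with x, y on probability simplices (illustrative only).
import numpy as np

def zo_simplex_step(f, x, y, tau=1e-4, lr=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng

    def grad_est(g, z):                      # two-point estimate of grad g(z)
        e = rng.standard_normal(z.shape)
        e /= np.linalg.norm(e)
        return z.size * (g(z + tau * e) - g(z - tau * e)) / (2.0 * tau) * e

    gx = grad_est(lambda u: f(u, y), x)      # descent direction in x
    gy = grad_est(lambda v: f(x, v), y)      # ascent direction in y
    x_new = x * np.exp(-lr * gx); x_new /= x_new.sum()   # entropic (multiplicative) update
    y_new = y * np.exp(lr * gy);  y_new /= y_new.sum()   # stays on the simplex
    return x_new, y_new
```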
Book Chapter

Solving smooth min-min and min-max problems by mixed oracle algorithms

TL;DR: In this paper, the outer minimization problem is treated as minimization with an inexact oracle, where the inner problem is either a minimization or a maximization problem; an inexact variant of Vaidya's cutting-plane method or a variant of the accelerated gradient method is used to solve the outer problem with nonasymptotic complexity guarantees.
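The "mixed oracle" idea can be illustrated as follows: the inner problem is solved only approximately, and its approximate solution supplies an inexact gradient of the outer objective g(x) = max_y f(x, y). The sketch below uses plain gradient ascent for the inner problem and plain gradient descent for the outer one; the names `grad_x`, `grad_y`, step sizes, and iteration counts are hypothetical, and the paper's actual outer solvers (Vaidya's method or an accelerated gradient method) are not reproduced here.

```python
# A minimal sketch of minimization with an inexact oracle coming from an
# approximately solved inner maximization (illustrative assumptions only).
import numpy as np

def inexact_outer_gradient(grad_x, grad_y, x, y0, inner_steps=50, inner_lr=0.1):
    """Approximately maximize f(x, .) from y0, then return grad_x f(x, y) as an inexact gradient."""
    y = y0.copy()
    for _ in range(inner_steps):
        y = y + inner_lr * grad_y(x, y)      # inexact inner maximization (gradient ascent)
    return grad_x(x, y), y                   # inexact gradient of g(x) = max_y f(x, y)

def outer_minimize(grad_x, grad_y, x0, y0, outer_steps=100, outer_lr=0.05):
    x, y = x0.copy(), y0.copy()
    for _ in range(outer_steps):
        g, y = inexact_outer_gradient(grad_x, grad_y, x, y)   # warm-start the inner solver
        x = x - outer_lr * g                 # outer step using the inexact oracle
    return x, y
```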
Journal Article

Federated Optimization Algorithms with Random Reshuffling and Gradient Compression

TL;DR: This work develops a distributed variant of random reshuffling with gradient compression (Q-RR), shows how to reduce the variance coming from gradient quantization through the use of control iterates, and proposes a variant of Q-RR called Q-NASTYA that better fits Federated Learning applications.
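The two ingredients described above are epoch-wise random reshuffling of local data and compression of gradient differences against control iterates, so that the quantization variance shrinks as the control iterates converge. The sketch below is a single-client illustration under stated assumptions (a simple rand-k sparsifier and hypothetical names); it is not the Q-RR or Q-NASTYA implementation.

```python
# A minimal single-client sketch of random reshuffling with compressed
# gradient differences and a control iterate (illustrative, not Q-RR itself).
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

def rr_compressed_epoch(x, grads, h, lr=0.1, k=10, alpha=0.5, rng=None):
    """One epoch: `grads` is a list of per-sample gradient functions; `h` is the control iterate."""
    rng = np.random.default_rng() if rng is None else rng
    for i in rng.permutation(len(grads)):    # random reshuffling of samples
        g = grads[i](x)
        q = rand_k(g - h, k, rng)            # compress the difference to the control iterate
        x = x - lr * (h + q)                 # step with the decompressed gradient estimate
        h = h + alpha * q                    # update the control iterate
    return x, h
```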