
Belhal Karimi

Researcher at Baidu

Publications -  25
Citations -  175

Belhal Karimi is an academic researcher at Baidu. The author has contributed to research in topics: Expectation–maximization algorithm & Computer science. The author has an h-index of 6 and co-authored 19 publications receiving 104 citations. Previous affiliations of Belhal Karimi include the French Institute for Research in Computer Science and Automation (Inria) & École Polytechnique.

Papers
Posted Content

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

TL;DR: This work analyzes a general stochastic approximation (SA) scheme for minimizing a non-convex, smooth objective function, and illustrates the setting with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
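
A minimal numerical sketch of the kind of biased SA recursion studied here is given below; the quadratic objective, step size, and constant bias term are illustrative assumptions, not the paper's actual setting.

```python
import numpy as np

def biased_gradient(theta, rng, bias=0.05):
    # Noisy gradient of f(theta) = 0.5 * ||theta||^2 with a small,
    # non-vanishing bias added to mimic a biased oracle (illustrative only).
    return theta + bias + rng.normal(scale=0.1, size=theta.shape)

def biased_sa(theta0, steps=2000, gamma=0.01, seed=0):
    # Stochastic approximation: theta_{k+1} = theta_k - gamma * H_{k+1},
    # where the drift H is only a biased estimate of the true gradient.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - gamma * biased_gradient(theta, rng)
    return theta

# The iterates settle in a neighborhood of -bias rather than the true minimizer 0,
# which is the asymptotic bias the non-asymptotic analysis quantifies.
print(biased_sa(np.ones(3)))
```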
Proceedings Article

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme.

TL;DR: In this article, the drift term depends on a state-dependent Markov chain and the mean field is not necessarily of gradient type, covering approximate second-order methods and allowing an asymptotic bias in the one-step updates.
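
To illustrate what "state-dependent Markovian noise" means in this context, the sketch below drives the SA update with an AR(1) chain whose transitions depend on the current iterate; the chain, drift, and step size are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def markov_sa(theta0, steps=3000, gamma=0.005, rho=0.9, seed=0):
    # SA update driven by a state-dependent Markov chain X_k: the next state
    # X_{k+1} is an AR(1) variable whose mean is shifted by the current theta,
    # so the noise is neither i.i.d. nor independent of the iterate.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    x = np.zeros_like(theta)
    for _ in range(steps):
        x = rho * x + (1.0 - rho) * theta + rng.normal(scale=0.1, size=theta.shape)
        drift = theta + x  # the mean field need not be the gradient of an objective
        theta = theta - gamma * drift
    return theta

print(markov_sa(np.ones(2)))
```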
Posted Content

FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching

TL;DR: This work introduces the FedSKETCH and FedSKETCHGATE algorithms to address both challenges in federated learning jointly; the two algorithms are intended for homogeneous and heterogeneous data distribution settings, respectively.
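
The communication savings come from transmitting a sketch of the gradient instead of the full vector. A minimal count-sketch compress/decompress of a gradient is sketched below; the hash construction, sketch dimensions, and recovery rule are generic assumptions, not the FedSKETCH protocol itself.

```python
import numpy as np

def make_sketch_ops(dim, rows=5, cols=64, seed=0):
    # Count sketch: each coordinate is hashed to one bucket per row with a random sign.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, cols, size=(rows, dim))
    signs = rng.choice([-1.0, 1.0], size=(rows, dim))

    def compress(grad):
        sketch = np.zeros((rows, cols))
        for r in range(rows):
            np.add.at(sketch[r], buckets[r], signs[r] * grad)
        return sketch  # rows * cols numbers instead of dim

    def decompress(sketch):
        # Estimate each coordinate by the median over rows of its signed bucket value.
        est = signs * sketch[np.arange(rows)[:, None], buckets]
        return np.median(est, axis=0)

    return compress, decompress

compress, decompress = make_sketch_ops(dim=1000)
g = np.zeros(1000); g[[3, 42, 777]] = [5.0, -2.0, 3.0]   # sparse "gradient"
print(decompress(compress(g))[[3, 42, 777]])              # roughly recovers the large entries
```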
Posted Content

On the Global Convergence of (Fast) Incremental Expectation Maximization Methods.

TL;DR: This paper analyzes incremental and stochastic versions of the EM algorithm, as well as the variance-reduced version of [Chen et al., 2018], in a common unifying framework and establishes non-asymptotic bounds for global convergence.
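
The idea behind incremental EM is to cache per-sample sufficient statistics and refresh them one observation at a time. Below is a minimal sketch for a two-component 1-D Gaussian mixture with known, equal variances and weights; the model and update schedule are simplifying assumptions for illustration.

```python
import numpy as np

def incremental_em(x, mu_init, n_passes=5, sigma=1.0):
    # Incremental EM: each E-step touches a single observation, while the
    # M-step reuses the running (possibly stale) responsibilities of all samples.
    n = len(x)
    mu = np.array(mu_init, dtype=float)
    resp = np.full((n, 2), 0.5)            # cached responsibilities
    for _ in range(n_passes):
        for i in range(n):
            # E-step for sample i only
            log_p = -0.5 * ((x[i] - mu) / sigma) ** 2
            w = np.exp(log_p - log_p.max())
            resp[i] = w / w.sum()
            # M-step from the accumulated sufficient statistics
            weights = resp.sum(axis=0)
            mu = resp.T @ x / weights
    return mu

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
print(incremental_em(data, mu_init=[-1.0, 1.0]))  # close to the true means (-2, 3)
```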
Proceedings Article

Towards Better Generalization of Adaptive Gradient Methods

TL;DR: Stable Adaptive Gradient Descent (SAGD) for non-convex optimization is proposed, which leverages differential privacy to boost the generalization performance of adaptive gradient methods.
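
As a loose illustration of combining an adaptive method with differential-privacy-style noise (not the SAGD algorithm itself), the sketch below performs one Adam-style step on a clipped, Gaussian-perturbed gradient; the clip norm, noise scale, and hyperparameters are assumptions.

```python
import numpy as np

def noisy_adam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                    eps=1e-8, clip=1.0, noise_std=0.1, rng=None):
    # Clip the gradient, add Gaussian noise (the standard DP recipe), then apply
    # an Adam-style adaptive update. Scales here are illustrative, not calibrated.
    rng = rng or np.random.default_rng()
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    g = g + noise_std * rng.normal(size=g.shape)
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), state

theta = np.ones(4)
state = {"m": np.zeros(4), "v": np.zeros(4), "t": 0}
theta, state = noisy_adam_step(theta, grad=2 * theta, state=state)
print(theta)
```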