scispace - formally typeset

Ali H. Sayed

Researcher at École Polytechnique Fédérale de Lausanne

Publications - 766
Citations - 39568

Ali H. Sayed is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research in topics: Adaptive filter & Optimization problem. The author has an h-index of 81, has co-authored 728 publications, and has received 36030 citations. Previous affiliations of Ali H. Sayed include Harbin Engineering University & University of California, Los Angeles.

Papers
Posted Content

Learning Kolmogorov Models for Binary Random Variables

TL;DR: This work considers a set of binary random variables, proposes an efficient algorithm for learning the Kolmogorov model, and establishes its first-order optimality, despite the combinatorial nature of the learning problem.
Journal Article (DOI)

Regularized Diffusion Adaptation via Conjugate Smoothing

TL;DR: The regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate, smooth problem can be made arbitrarily close to the solution of the original, non-smooth problem.
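The infimal-convolution smoothing mentioned in this summary can be illustrated with a standard instance: taking the Moreau envelope of the ℓ1 regularizer yields the Huber function, which is smooth everywhere yet approaches |w| as the smoothing parameter shrinks. This is a minimal sketch of the general technique, not the paper's algorithm; the function names and the choice of ℓ1 are illustrative.

```python
import numpy as np

def huber_smooth_l1(w, delta):
    """Moreau envelope of |w| (infimal convolution with a quadratic):
    quadratic near zero, linear in the tails, differentiable at 0."""
    a = np.abs(w)
    return np.where(a <= delta, 0.5 * a ** 2 / delta, a - 0.5 * delta)

def huber_smooth_l1_grad(w, delta):
    """Gradient of the smoothed regularizer; bounded by 1 in magnitude,
    so gradient-based (e.g. diffusion) updates remain well behaved."""
    return np.clip(w / delta, -1.0, 1.0)

# as delta shrinks, the smooth surrogate approaches |w| from below
w = np.linspace(-2.0, 2.0, 5)
gap = np.abs(w) - huber_smooth_l1(w, 0.1)   # at most delta/2 everywhere
```

The bounded gap (at most delta/2 pointwise) mirrors the claim that the solution of the smoothed problem can be driven arbitrarily close to that of the original non-smooth one by shrinking delta.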
Posted Content

Graph-Homomorphic Perturbations for Private Decentralized Learning

TL;DR: This work proposes an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible (to first order in the step-size) to the network centroid, while preserving privacy guarantees.
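The nullspace condition described here can be illustrated with a toy construction: give each agent a perturbation formed as a difference of shared noise terms, so the perturbations sum to zero across the network and leave the centroid (the network average) untouched while masking individual iterates. This is a sketch of the general zero-sum idea only, not the paper's scheme; `zero_sum_noise` is a hypothetical helper.

```python
import numpy as np

def zero_sum_noise(num_agents, dim, scale, rng):
    """Perturbations lying in the nullspace of averaging: agent k adds
    g_k = n_k - n_{k+1} (cyclic), so sum_k g_k = 0 exactly and the
    network centroid is unaffected by the added noise."""
    n = scale * rng.standard_normal((num_agents, dim))
    return n - np.roll(n, -1, axis=0)

rng = np.random.default_rng(0)
G = zero_sum_noise(5, 3, 0.1, rng)   # one perturbation row per agent
```

Each row of `G` is non-trivial noise from an individual agent's perspective, yet the column sums vanish, which is the property that keeps the centroid invisible to the perturbations.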
Journal Article (DOI)

Second-order guarantees in centralized, federated and decentralized nonconvex optimization

TL;DR: Recent results on second-order guarantees for stochastic first-order optimization algorithms in centralized, federated, and decentralized architectures are covered.
Proceedings Article (DOI)

Distributed Coupled Learning Over Adaptive Networks

TL;DR: This work develops an effective distributed algorithm for the solution of stochastic optimization problems that involve partial coupling among both local constraints and local cost functions and establishes mean-square-error convergence of the resulting strategy for sufficiently small step-sizes and large penalty factors.
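A base ingredient of such distributed strategies is adapt-then-combine (ATC) diffusion: each agent takes a local stochastic-gradient step, then averages its neighbors' intermediate estimates through a combination matrix. The sketch below shows plain diffusion LMS only, under a mean-square-error cost; the coupled constraints and penalty factors of the paper are not modeled, and all names and the example network are illustrative.

```python
import numpy as np

def atc_diffusion_lms(A, data, mu, num_iters, dim, rng):
    """Adapt-then-combine diffusion LMS over a network of N agents.
    A is an N x N left-stochastic combination matrix: A[l, k] weights
    neighbor l's intermediate estimate in agent k's combination step."""
    N = A.shape[0]
    w = np.zeros((N, dim))
    for _ in range(num_iters):
        psi = np.empty_like(w)
        for k in range(N):
            u, d = data(k, rng)            # regressor and scalar observation
            err = d - u @ w[k]
            psi[k] = w[k] + mu * err * u   # adapt: local stochastic-gradient step
        for k in range(N):
            w[k] = A[:, k] @ psi           # combine neighbors' intermediates
    return w

# hypothetical usage: three fully connected agents estimating a common model
rng = np.random.default_rng(1)
w_true = np.array([1.0, -0.5])

def data(k, rng):
    u = rng.standard_normal(2)
    return u, u @ w_true + 0.01 * rng.standard_normal()

A = np.full((3, 3), 1.0 / 3.0)   # uniform (doubly stochastic) combiner
w = atc_diffusion_lms(A, data, mu=0.05, num_iters=2000, dim=2, rng=rng)
```

With a sufficiently small step-size `mu`, every agent's estimate converges in the mean-square sense to a small neighborhood of `w_true`, which is the flavor of guarantee the summary refers to.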