Author

Ali H. Sayed

Bio: Ali H. Sayed is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including adaptive filters and optimization problems. The author has an h-index of 81 and has co-authored 728 publications receiving 36,030 citations. Previous affiliations of Ali H. Sayed include Harbin Engineering University and the University of California, Los Angeles.


Papers
Journal ArticleDOI
TL;DR: A computable condition for checking if the problem is degenerate as well as an efficient algorithm to find the global solution with minimum Euclidean norm are presented.
Abstract: We consider the following problem: $\min_{x \in \mathbb{R}^n} \min_{\|E\| \le \eta} \|(A+E)x-b\|$, where $A$ is an $m \times n$ real matrix and $b$ is an $m$-dimensional real column vector, in the case where the problem has multiple global minima. This is an errors-in-variables problem, closely related to total least squares with bounded uncertainty. A computable condition for checking whether the problem is degenerate, as well as an efficient algorithm for finding the global solution with minimum Euclidean norm, are presented.
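
A minimal numerical sketch (my illustration, not the paper's efficient algorithm): for fixed $x$, the inner minimization over $\|E\| \le \eta$ has the closed form $\max(\|Ax-b\| - \eta\|x\|, 0)$, so the outer problem can be probed with a generic derivative-free optimizer.

```python
# Sketch of the bounded errors-in-variables problem
#   min_x  min_{||E|| <= eta}  ||(A + E)x - b||
# using the closed form of the inner minimum; the paper's contributions
# (degeneracy test, minimum-norm global solution) are not reproduced here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # m x n data matrix
b = rng.standard_normal(8)        # m-dimensional observation vector
eta = 0.5                         # bound on the perturbation norm ||E||

def cost(x):
    # Inner minimization over ||E|| <= eta, done in closed form.
    return max(np.linalg.norm(A @ x - b) - eta * np.linalg.norm(x), 0.0)

x0 = np.linalg.lstsq(A, b, rcond=None)[0]       # least-squares starting point
res = minimize(cost, x0, method="Nelder-Mead")  # derivative-free: cost is nonsmooth
print(res.x, cost(res.x))
```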

24 citations

Proceedings ArticleDOI
07 May 2001
TL;DR: This work derives conditions on the step size for stability and provides closed-form expressions for the steady-state performance of leaky adaptive algorithms that employ a general scalar or matrix data nonlinearity.
Abstract: We study leaky adaptive algorithms that employ a general scalar or matrix data nonlinearity. We perform mean-square analysis of this class of algorithms without imposing restrictions on the distribution of the input signal. In particular, we derive conditions on the step size for stability and provide closed-form expressions for the steady-state performance.
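
As a concrete instance of this algorithm class, here is a minimal sketch, assuming the common leaky-LMS form of the update; the scalar data nonlinearity g is user-supplied, and the filter length, step size, and leak factor are illustrative choices of mine.

```python
import numpy as np

def leaky_adapt(u, d, M, mu=0.01, alpha=0.01, g=lambda ui: 1.0):
    """Leaky adaptive filter with a scalar data nonlinearity g.
    g(ui) = 1 gives leaky LMS; g(ui) = 1/||ui||^2 gives a leaky NLMS variant."""
    w = np.zeros(M)
    e = np.zeros(len(d))
    for i in range(M - 1, len(d)):
        ui = u[i - M + 1:i + 1][::-1]     # regression vector
        e[i] = d[i] - ui @ w              # a priori estimation error
        # Leaky update: the (1 - mu*alpha) factor pulls w toward zero.
        w = (1.0 - mu * alpha) * w + mu * g(ui) * e[i] * ui
    return w, e

# Identify a short FIR channel from noisy data.
rng = np.random.default_rng(1)
u = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2])
d = np.convolve(u, h)[:len(u)] + 0.01 * rng.standard_normal(len(u))
w, e = leaky_adapt(u, d, M=3, g=lambda ui: 1.0 / (1e-6 + ui @ ui))
print(w)   # close to h, biased slightly toward zero by the leak
```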

24 citations

Journal ArticleDOI
TL;DR: It is shown that, contrary to what the original derivations may suggest, fast fixed-order RLS adaptive algorithms are not limited to FIR filter structures, and that fast recursions in both explicit and array forms exist for more general data structures, such as orthonormally based models.
Abstract: The existing derivations of conventional fast RLS adaptive filters are intrinsically dependent on the shift structure in the input regression vectors. This structure arises when a tapped-delay-line (FIR) filter is used as the modeling filter. We show that, contrary to what the original derivations may suggest, fast fixed-order RLS adaptive algorithms are not limited to FIR filter structures. We show that fast recursions in both explicit and array forms exist for more general data structures, such as orthonormally based models. One benefit of working with orthonormal bases is that fewer parameters can be used to model long impulse responses.
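
For context, the conventional exponentially weighted RLS recursion is sketched below (standard textbook form, not the paper's fast explicit or array algorithms). Nothing in the recursion itself requires shift-structured regressors; the structure, whether tapped-delay-line or orthonormal-basis, is what permits reducing the O(M^2) cost per step to O(M).

```python
import numpy as np

def rls(regressors, d, lam=0.995, delta=100.0):
    """Exponentially weighted RLS: regressors is an (N, M) array whose rows
    may come from any model structure (FIR taps, orthonormal filter bank, ...)."""
    M = regressors.shape[1]
    w = np.zeros(M)
    P = delta * np.eye(M)              # estimate of the inverse correlation matrix
    for ui, di in zip(regressors, d):
        pi = P @ ui
        k = pi / (lam + ui @ pi)       # gain vector
        e = di - ui @ w                # a priori error
        w = w + k * e
        P = (P - np.outer(k, pi)) / lam   # rank-one update, O(M^2) per step
    return w
```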

24 citations

Proceedings ArticleDOI
26 May 2020
TL;DR: This work considers a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data, and establishes that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
Abstract: Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments. While many federated learning architectures process data in an online manner, and are hence adaptive by nature, most performance analyses assume static optimization problems and offer no guarantees in the presence of drifts in the problem solution or data characteristics. We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data. Under a nonstationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm. The results clarify the trade-off between convergence and tracking performance.
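
A toy simulation of this setup (my construction, with illustrative parameters): a server averages models returned by a randomly sampled subset of agents, each of which runs a few local LMS-type updates on its own streaming data, while the true minimizer drifts as a random walk.

```python
import numpy as np

rng = np.random.default_rng(2)
K, sampled, dim = 20, 5, 4          # agents, agents sampled per round, model size
mu, local_steps = 0.05, 3           # learning rate, local updates per round
w_true = rng.standard_normal(dim)   # minimizer of the aggregate problem
w = np.zeros(dim)                   # server model

for t in range(500):
    w_true += 0.001 * rng.standard_normal(dim)    # random-walk drift
    updates = []
    for k in rng.choice(K, size=sampled, replace=False):
        wk = w.copy()
        for _ in range(local_steps):              # local SGD on agent k's data
            u = rng.standard_normal(dim)          # streaming regressor
            d = u @ w_true + 0.1 * rng.standard_normal()
            wk += mu * (d - u @ wk) * u           # LMS-type stochastic gradient step
        updates.append(wk)
    w = np.mean(updates, axis=0)                  # server aggregation

# Shrinking mu reduces gradient-noise error but inflates the tracking term.
print(np.linalg.norm(w - w_true))
```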

23 citations

Proceedings ArticleDOI
20 Mar 2016
TL;DR: The results establish that momentum methods are equivalent to the standard stochastic gradient method with a re-scaled (larger) step-size value, and suggest a method to enhance performance in the stochastic setting by tuning the momentum parameter over time.
Abstract: This paper examines the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime. The results establish that momentum methods are equivalent to the standard stochastic gradient method with a re-scaled (larger) step-size value. The equivalence result is established for all time instants and not only in steady-state. The analysis is carried out for general risk functions, and is not limited to quadratic risks. One notable conclusion is that the well-known benefits of momentum constructions for deterministic optimization problems do not necessarily carry over to the stochastic setting when gradient noise is present and continuous adaptation is necessary. The analysis suggests a method to enhance performance in the stochastic setting by tuning the momentum parameter over time.
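
The equivalence can be checked numerically; the sketch below (illustrative only, on a simple quadratic risk) runs heavy-ball momentum SGD with step size mu and momentum beta alongside plain SGD with the rescaled step size mu/(1 - beta), and the two reach comparable steady-state errors.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, mu, beta, steps = 5, 0.002, 0.9, 20000
w_true = rng.standard_normal(dim)

def grad(w):
    # Stochastic gradient of the quadratic risk E|d - u'w|^2 from one sample.
    u = rng.standard_normal(dim)
    d = u @ w_true + 0.1 * rng.standard_normal()
    return -(d - u @ w) * u

w_m, v = np.zeros(dim), np.zeros(dim)    # momentum iterate and velocity
w_s = np.zeros(dim)                      # plain SGD iterate
for _ in range(steps):
    v = beta * v + grad(w_m)
    w_m = w_m - mu * v                          # heavy-ball momentum update
    w_s = w_s - (mu / (1 - beta)) * grad(w_s)   # rescaled plain SGD

# Comparable steady-state errors, as the equivalence result predicts.
print(np.linalg.norm(w_m - w_true), np.linalg.norm(w_s - w_true))
```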

23 citations


Cited by
Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: A reflection on i, the square root of minus one: an odd beast on first encounter, an intruder hovering on the edge of reality, whose surreal character only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
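
As a concrete example of the CP decomposition described above, here is a bare-bones alternating-least-squares sketch in NumPy for a 3-way tensor; this is a sketch only, and the toolboxes named in the survey provide far more robust implementations.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: (J*K) x R.
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_als(X, R, iters=200, seed=0):
    """Fit X (I x J x K) as a sum of R rank-one tensors by alternating LS."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X1 = X.reshape(I, -1)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover a synthetic rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((4, 2)), rng.standard_normal((5, 2)), rng.standard_normal((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # near zero
```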

9,227 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: This work reviews some of the major results in random graphs and some of the more challenging open problems, touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

01 Jan 2016
Table of Integrals, Series, and Products

4,085 citations