Author

Ali H. Sayed

Bio: Ali H. Sayed is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including adaptive filters and optimization problems, has an h-index of 81, and has co-authored 728 publications receiving 36,030 citations. Previous affiliations of Ali H. Sayed include Harbin Engineering University and the University of California, Los Angeles.


Papers
Proceedings ArticleDOI
01 Nov 2011
TL;DR: This paper derives continuous-time diffusion adaptive algorithms, which can help provide more accurate models for exchanges of information, and also for systems with large variations in their time constants.
Abstract: Adaptive diffusion models endow networks with distributed learning and cognitive abilities. These models have been applied recently to emulate various forms of complex and self-organized patterns of behavior encountered in biological networks. In diffusion adaptation, nodes share information with their neighbors in real-time, and the network evolves towards a common objective through decentralized coordination and in-network processing. Current models are based on discrete-time adaptive diffusion strategies. However, physical phenomena usually are governed by continuous-time dynamics. In this paper, we derive continuous-time diffusion adaptive algorithms, which can help provide more accurate models for exchanges of information, and also for systems with large variations in their time constants.
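The discrete-time baseline that the paper extends can be sketched as a standard adapt-then-combine (ATC) diffusion LMS recursion. The network, combination matrix, step-size, and signal model below are a hypothetical toy setup, not the configuration used in the paper; the paper's contribution is the continuous-time counterpart of this kind of recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node ring network with a doubly stochastic combination matrix A.
N, M = 4, 3                      # number of nodes, filter length
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
w_true = rng.standard_normal(M)  # common parameter the network estimates
w = np.zeros((N, M))             # local estimates, one row per node
mu = 0.05                        # step-size

for _ in range(2000):
    # Adapt: each node takes one LMS step on its own streaming data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                      # regressor at node k
        d = u @ w_true + 0.01 * rng.standard_normal()   # noisy measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combine: each node averages the intermediate estimates of its neighbors.
    w = A @ psi

msd = np.mean(np.sum((w - w_true) ** 2, axis=1))        # network mean-square deviation
```

With small step-sizes and low noise, all nodes converge to a close neighborhood of the common parameter through purely local interactions.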

4 citations

Journal ArticleDOI
TL;DR: One of the main findings is that diffusion LMS with delays can still converge under the same step-size condition as diffusion LMS without delays.
Abstract: We study the problem of distributed estimation over adaptive networks where communication delays exist between nodes. In particular, we investigate the diffusion Least-Mean-Square (LMS) strategy where delayed intermediate estimates (due to the communication channels) are employed during the combination step. One important question is: Do the delays affect the stability condition and performance? To answer this question, we conduct a detailed performance analysis in the mean and in the mean-square-error sense of the diffusion LMS with delayed estimates. Stability conditions, transient and steady-state mean-square-deviation (MSD) expressions are provided. One of the main findings is that diffusion LMS with delays can still converge under the same step-size condition as the diffusion LMS without delays. Finally, simulation results illustrate the theoretical findings.
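The delayed-combination idea can be sketched by buffering the intermediate estimates and letting each node combine its neighbors' estimates from a few iterations ago. The uniform weights, fixed delay, and signal model below are hypothetical toy choices, not the paper's analysis setting; the sketch only illustrates that convergence survives the delay.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

N, M, mu, delay = 4, 3, 0.05, 2   # nodes, filter length, step-size, link delay
A = np.full((N, N), 0.25)          # uniform combination weights (toy fully connected case)
w_true = rng.standard_normal(M)
w = np.zeros((N, M))
# Buffer of past intermediate estimates; history[0] is the oldest (delay iterations back).
history = deque([np.zeros((N, M)) for _ in range(delay + 1)], maxlen=delay + 1)

for _ in range(3000):
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.01 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    history.append(psi)
    delayed = history[0]
    # Combine delayed neighbor estimates; each node's own estimate has no delay.
    w = np.array([A[k, k] * psi[k] +
                  sum(A[k, l] * delayed[l] for l in range(N) if l != k)
                  for k in range(N)])

msd = np.mean(np.sum((w - w_true) ** 2, axis=1))
```

The same step-size that stabilizes the delay-free recursion still drives the delayed network to a small steady-state MSD, consistent with the paper's main finding.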

4 citations

Journal ArticleDOI
TL;DR: In this paper, a decentralized proximal primal-dual algorithm was proposed for multi-agent sharing optimization problems, where each agent owns a local smooth function plus a non-smooth function, and the network seeks to minimize the sum of all local functions plus a coupling composite function.
Abstract: This work considers multi-agent sharing optimization problems, where each agent owns a local smooth function plus a non-smooth function, and the network seeks to minimize the sum of all local functions plus a coupling composite function (possibly non-smooth). For this non-smooth setting, centralized algorithms are known to converge linearly under certain conditions. On the other hand, decentralized algorithms have not been shown to achieve linear convergence under the same conditions. In this work, we propose a decentralized proximal primal-dual algorithm and establish its linear convergence under weaker conditions than existing decentralized works. Our result shows that decentralized algorithms match the linear rate of centralized algorithms without any extra condition. Finally, we provide numerical simulations that illustrate the theoretical findings and show the advantages of the proposed method.
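The non-smooth terms in such sharing problems are typically handled through proximal operators. As a minimal, self-contained illustration (not the paper's algorithm), here is the proximal operator of the scaled ℓ1 norm, i.e. soft-thresholding, applied to a toy smooth-plus-nonsmooth problem whose exact minimizer the prox recovers in one step:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy problem: min_x 0.5 * ||x - b||^2 + lam * ||x||_1.
# Its closed-form minimizer is exactly prox_l1(b, lam), so a single
# proximal-gradient step from x = b with unit step-size solves it.
b = np.array([1.5, -0.2, 0.7, -3.0])
lam = 0.5
x = prox_l1(b, lam)
# Entries with |b_i| <= lam are set to zero; the rest shrink toward zero by lam.
```

In the decentralized primal-dual setting, each agent applies a prox of this kind locally while dual variables coordinate the coupling term across the network.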

4 citations

Proceedings ArticleDOI
01 Sep 2018
TL;DR: This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors.
Abstract: This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors. In the proposed algorithm, there is no need for a central or master unit while the objective is to enable the dispersed nodes to learn the exact global model despite their limited localized interactions. The resulting algorithm is shown to have low memory requirement, guaranteed linear convergence, robustness to failure of links or nodes and scalability to the network size. Moreover, the decentralized nature of the solution makes large-scale machine learning problems more tractable and also scalable since data is stored and processed locally at the nodes.
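The variance-reduction building block can be sketched at a single node with a SAGA-style gradient estimator; this is a generic illustration of the memory-based estimator, not the paper's decentralized algorithm, and the least-squares data below are synthetic. The paper combines estimators of this kind with neighbor-to-neighbor diffusion steps.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic noiseless least-squares data at one node.
n, d = 50, 3
X = rng.standard_normal((n, d))
w_star = np.array([1.0, -2.0, 0.5])
y = X @ w_star

w = np.zeros(d)
table = np.zeros((n, d))                  # stored gradient for each sample
avg = table.mean(axis=0)                  # running average of stored gradients
mu = 0.01

for _ in range(5000):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]          # fresh gradient of sample i
    w -= mu * (g - table[i] + avg)        # variance-reduced search direction
    avg += (g - table[i]) / n             # keep the average consistent
    table[i] = g                          # overwrite the stored gradient

err = np.linalg.norm(w - w_star)
```

Because the correction `g - table[i] + avg` is unbiased with vanishing variance at the solution, the iterates converge linearly with a constant step-size, which is the property the decentralized scheme inherits.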

4 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The analysis provides insight into the interplay between the network topology, the combination weights, and the inference performance, revealing the universal behavior of diffusion-based detectors over adaptive networks.
Abstract: Exploiting recent progress [1]-[4] in the characterization of the detection performance of diffusion strategies over adaptive multi-agent networks: i) we present two theoretical approximations, one based on asymptotic normality and the other based on the theory of exact asymptotics; and ii) we develop an efficient simulation method by tailoring the importance sampling technique to diffusion adaptation. We show that these theoretical and experimental tools complement each other well, with their combination offering a substantial advance for a reliable quantitative detection-performance assessment. The analysis provides insight into the interplay between the network topology, the combination weights, and the inference performance, revealing the universal behavior of diffusion-based detectors over adaptive networks.
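Importance sampling makes very small error probabilities estimable with modest sample sizes by drawing from a shifted distribution and reweighting. As a toy analogue (not the paper's sampler tailored to diffusion adaptation), here is an exponentially tilted estimator of the Gaussian tail probability P(X > a), a stand-in for a detector's rare false-alarm rate:

```python
import numpy as np

rng = np.random.default_rng(3)

a, n = 4.0, 100_000
# Draw from the shifted proposal N(a, 1), where crossings of the threshold
# are common, and correct with the likelihood ratio phi(z) / phi(z - a).
z = rng.normal(a, 1.0, n)
weights = np.exp(-a * z + 0.5 * a * a)
est = np.mean((z > a) * weights)
# True value: P(X > 4) for X ~ N(0, 1) is about 3.17e-5; a plain Monte Carlo
# estimate with the same n would see only a handful of threshold crossings.
```

The shifted proposal concentrates samples in the rare-event region, so the estimator's relative error stays small even though the target probability is tiny.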

4 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
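The CP building block described above, a rank-one tensor as an outer product of vectors, can be illustrated in a few lines. The vectors below are an arbitrary toy example; a general CP decomposition would sum several such rank-one terms.

```python
import numpy as np

# A rank-one third-order tensor is the outer product of three vectors:
# T[i, j, k] = a[i] * b[j] * c[k].
a = np.array([1.0, 2.0])
b = np.array([1.0, -1.0, 3.0])
c = np.array([2.0, 0.5])

T = np.einsum('i,j,k->ijk', a, b, c)   # shape (2, 3, 2)

# The mode-1 unfolding flattens the tensor into a matrix; for a rank-one
# tensor, every unfolding is a rank-one matrix.
T1 = T.reshape(2, -1)                   # 2 x 6 matrix
rank = np.linalg.matrix_rank(T1)
```

Checking one entry, `T[1, 2, 0]` equals `a[1] * b[2] * c[0]`, which makes the outer-product structure concrete.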

9,227 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Reviews some of the major results in random graphs and some of the more challenging open problems, and touches on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
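The basic object here, the Erdős–Rényi graph G(n, p), is simple to generate: each of the n(n-1)/2 possible edges appears independently with probability p. A classic structural result is that for p well above ln(n)/n the graph is connected with high probability; the snippet below is a toy empirical check of that behavior (one random instance, not a proof), with n and p chosen arbitrarily.

```python
import random

random.seed(0)

n, p = 200, 0.1          # ln(200)/200 is roughly 0.026, so p = 0.1 is well above the threshold
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:   # each edge tossed independently
            adj[u].add(v)
            adj[v].add(u)

# Depth-first search from node 0 to test connectivity.
seen, stack = {0}, [0]
while stack:
    u = stack.pop()
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            stack.append(v)

connected = (len(seen) == n)
```

At this edge density the expected degree is about 20, so isolated vertices, the typical obstruction to connectivity near the threshold, are vanishingly unlikely.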

7,116 citations

Journal ArticleDOI

6,278 citations

01 Jan 2016
Table of Integrals, Series, and Products

4,085 citations