Author

Ali H. Sayed

Bio: Ali H. Sayed is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research topics including Adaptive filter and Optimization problem. The author has an h-index of 81 and has co-authored 728 publications receiving 36,030 citations. Previous affiliations of Ali H. Sayed include Harbin Engineering University and University of California, Los Angeles.


Papers
Journal ArticleDOI
TL;DR: The results indicate that whether temporal processing is performed before or after adaptation, the strategy that performs adaptation before spatial cooperation leads to smaller mean-square error.
Abstract: We present diffusion algorithms for distributed estimation and detection over networks that endow all nodes with both spatial cooperation abilities and temporal processing abilities. Each node in the network is allowed to share information locally with its neighbors; this step amounts to sharing and processing of spatial data. At the same time, each node is allowed to filter and process past estimates to improve estimation accuracy through an overall collaborative process. In this manner, the resulting distributed algorithms consist of three stages: adaptation, spatial processing, and temporal processing. Moreover, the order of these three stages can be interchanged, leading to a total of six variations. The results indicate that whether temporal processing is performed before or after adaptation, the strategy that performs adaptation before spatial cooperation leads to smaller mean-square error. The additional temporal processing step is useful in combating perturbations due to noise over the communication links. We further describe an application in the context of distributed detection and provide computer simulations to illustrate and support the findings.

24 citations
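The adaptation and spatial-cooperation stages described in the abstract can be sketched as an adapt-then-combine (ATC) diffusion LMS recursion. The sketch below is a minimal illustration, not the paper's algorithm; the network size, topology, and all variable names are assumptions.

```python
import numpy as np

# Minimal adapt-then-combine (ATC) diffusion LMS sketch over a ring network.
# Each node first adapts with its local data, then convex-combines its
# neighbors' intermediate estimates (the spatial cooperation step).

rng = np.random.default_rng(0)
M = 4                      # filter length
N = 5                      # number of nodes
w_true = rng.standard_normal(M)

# Doubly stochastic combination matrix: each node averages itself
# and its two ring neighbors with equal weights.
A = np.zeros((N, N))
for k in range(N):
    for j in (k - 1, k, k + 1):
        A[k, j % N] = 1.0 / 3.0

w = np.zeros((N, M))       # one estimate per node
mu = 0.01                  # step size

for _ in range(2000):
    # Adaptation step: local LMS update at every node.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                  # local regressor
        d = u @ w_true + 0.01 * rng.standard_normal()  # noisy measurement
        e = d - u @ w[k]
        psi[k] = w[k] + mu * e * u
    # Spatial cooperation step: combine neighbors' intermediate estimates.
    w = A @ psi

print(np.linalg.norm(w[0] - w_true))  # small residual weight error
```

Interchanging the two steps (combine first, then adapt) gives the CTA variant; per the abstract's findings, the ATC ordering above yields the smaller mean-square error.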

Journal ArticleDOI
TL;DR: The authors extend the concept of displacement structure to time-variant matrices and use it to efficiently and recursively propagate the Cholesky factor of such matrices to solve the normal equations that arise in adaptive least-squares filtering.
Abstract: The authors extend the concept of displacement structure to time-variant matrices and use it to efficiently and recursively propagate the Cholesky factor of such matrices. A natural implementation of the algorithm is via a modular triangular array of processing elements. When the algorithm is applied to solve the normal equations that arise in adaptive least-squares filtering, they obtain the so-called QR algorithm, with the extra bonus of a parallelizable procedure for determining the weight vector. It is shown that the general algorithm can also be implemented in time-variant lattice form; a specialization of this result yields a time-variant Schur algorithm.

24 citations
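The QR route to the normal equations mentioned in the abstract can be illustrated in miniature: solve the least-squares problem through a QR factorization of the data matrix rather than by forming the normal-equations matrix explicitly. This is a generic sketch under assumed data, not the paper's recursive array algorithm.

```python
import numpy as np

# Weight-vector computation via QR instead of the normal equations.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 4))            # data (regressor) matrix
w_true = np.array([1.0, -2.0, 0.5, 3.0])
d = A @ w_true + 0.001 * rng.standard_normal(100)  # noisy desired signal

Q, R = np.linalg.qr(A)                       # A = Q R, R upper triangular
w = np.linalg.solve(R, Q.T @ d)              # back-substitution gives weights

# Same solution as the normal equations (A^T A) w = A^T d,
# but without squaring the condition number of A.
w_ne = np.linalg.solve(A.T @ A, A.T @ d)
print(np.allclose(w, w_ne))
```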

Proceedings Article
01 Aug 2011
TL;DR: This paper employs adaptive diffusion techniques to estimate the interference profile in a cooperative manner; numerical examples show that cooperative spectrum sensing considerably improves the performance of the swarm-based resource allocation technique.
Abstract: The goal of this paper is to study the learning abilities of adaptive networks in the context of cognitive radio networks and to investigate how well they assist in allocating power and communications resources in the frequency domain. The allocation mechanism is based on a social foraging swarm model that lets every node allocate its resources (power/bits) in the frequency regions where the interference is at a minimum while avoiding collisions with other nodes. We employ adaptive diffusion techniques to estimate the interference profile in a cooperative manner and to guide the motion of the swarm individuals in the resource domain. A mean square performance analysis of the proposed strategy is provided and confirmed by simulation results. Numerical examples show that cooperative spectrum sensing improves the performance of the swarm-based resource allocation technique considerably.

24 citations

Proceedings ArticleDOI
15 Dec 1993
TL;DR: In this paper, it was shown that the LMS algorithm is a minimizer of the H∞ error norm, and that the normalized LMS minimizes the energy gain from disturbances to the filtered errors.
Abstract: Shows that the celebrated LMS (least-mean-squares) adaptive algorithm is an H∞-optimal filter. In other words, the LMS algorithm, which has long been regarded as an approximate least-mean-squares solution, is in fact a minimizer of the H∞ error norm. In particular, the LMS minimizes the energy gain from the disturbances to the predicted errors, while the normalized LMS minimizes the energy gain from the disturbances to the filtered errors. Moreover, since these algorithms are central H∞ filters, they are also risk-sensitive optimal and minimize a certain exponential cost function. The authors discuss various implications of these results and show how they provide theoretical justification for the widely observed excellent robustness properties of the LMS filter.

24 citations
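The energy-gain bound in the abstract can be checked numerically: when the step size satisfies mu·||u_i||² ≤ 1, the energy of the a priori prediction errors never exceeds the energy of the disturbances (the weighted initial weight error plus the measurement noise). The snippet below is an empirical illustration of that inequality under assumed data, not a proof; all names are illustrative.

```python
import numpy as np

# Empirical check of the H-infinity energy-gain bound for LMS:
# sum of a priori error energies <= (1/mu)*||initial weight error||^2
#                                   + sum of noise energies,
# provided mu * ||u_i||^2 <= 1 at every step.

rng = np.random.default_rng(2)
M = 3
w_true = rng.standard_normal(M)
w = np.zeros(M)                 # initial weight estimate
mu = 0.05                       # mu * ||u||^2 = 0.05 <= 1 below

num = 0.0                                        # a priori error energy
den = np.dot(w_true - w, w_true - w) / mu        # weighted initial disturbance
for _ in range(500):
    u = rng.standard_normal(M)
    u = u / np.linalg.norm(u)                    # unit-norm regressor
    v = 0.01 * rng.standard_normal()             # measurement noise
    d = u @ w_true + v
    e_apriori = u @ (w_true - w)                 # undisturbed a priori error
    num += e_apriori ** 2
    den += v ** 2
    e = d - u @ w
    w = w + mu * e * u                           # standard LMS update

print(num / den)  # energy gain; stays at or below 1 in this regime
```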

Proceedings ArticleDOI
04 May 2014
TL;DR: This work proposes a distributed mechanism for the nodes to switch from using fixed doubly-stochastic combination weights to adaptive combination weights, and by knowing when to switch, the agents are able to enhance their steady-state mean-square-error performance without degrading the rate of convergence during the transient phase of the learning algorithm.
Abstract: We show how the convergence time of an adaptive network can be estimated in a distributed manner by the agents. Using this procedure, we propose a distributed mechanism for the nodes to switch from using fixed doubly-stochastic combination weights to adaptive combination weights. By doing so, and by knowing when to switch, the agents are able to enhance their steady-state mean-square-error performance without degrading the rate of convergence during the transient phase of the learning algorithm.

24 citations


Cited by
Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.

9,227 citations
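The CP decomposition described in the abstract expresses a tensor as a sum of rank-one (outer-product) terms. The sketch below builds a small rank-2 CP tensor with made-up factors and checks a standard consequence: any mode unfolding has matrix rank at most the CP rank.

```python
import numpy as np

# A rank-2 CP tensor: sum of two rank-one outer products a (x) b (x) c.
# All factor vectors are illustrative.
a1, b1, c1 = np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0, 1.0])
a2, b2, c2 = np.array([1.0, 0.0]), np.array([2.0, 1.0]), np.array([0.0, 1.0, 2.0])

T = (np.einsum('i,j,k->ijk', a1, b1, c1)
     + np.einsum('i,j,k->ijk', a2, b2, c2))   # 2 x 2 x 3 tensor, CP rank <= 2

# Mode-1 unfolding: rows indexed by the first mode, columns by the rest.
T1 = T.reshape(2, -1)
print(T.shape, np.linalg.matrix_rank(T1))     # matrix rank is at most 2
```

Tucker decomposition generalizes this by allowing a small dense core tensor in place of the diagonal weighting implicit in the CP sum.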

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI

6,278 citations

01 Jan 2016
Table of Integrals, Series, and Products

4,085 citations