Author

Ali H. Sayed

Bio: Ali H. Sayed is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research in topics: Adaptive filter & Optimization problem. The author has an h-index of 81 and has co-authored 728 publications receiving 36,030 citations. Previous affiliations of Ali H. Sayed include Harbin Engineering University & University of California, Los Angeles.


Papers
Proceedings ArticleDOI
01 Dec 2013
TL;DR: The steady-state probability distribution of diffusion and consensus strategies that employ constant step-sizes to enable continuous adaptation and learning is studied, and it is shown that, in the small step-size regime, the estimation error at each agent approaches a Gaussian distribution.
Abstract: We study the steady-state probability distribution of diffusion and consensus strategies that employ constant step-sizes to enable continuous adaptation and learning. We show that, in the small step-size regime, the estimation error at each agent approaches a Gaussian distribution. More importantly, the covariance matrix of this distribution is shown to coincide with the error covariance matrix that would result from a centralized stochastic-gradient strategy. The results hold regardless of the connected topology and help clarify the convergence and learning behavior of distributed strategies in an interesting way.
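The adapt-then-combine (ATC) form of the diffusion strategy described in the abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's exact setup: the ring topology, uniform combination weights, step size, and noise level below are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, mu, T = 5, 4, 0.01, 4000   # agents, filter length, step size, iterations
w_true = rng.standard_normal(N)  # common model all agents try to estimate

# Doubly stochastic combination matrix for a ring of M agents
# (each agent averages itself and its two neighbors).
A = np.zeros((M, M))
for k in range(M):
    A[k, k] = A[k, (k - 1) % M] = A[k, (k + 1) % M] = 1 / 3

W = np.zeros((M, N))             # one estimate per agent (rows)
for _ in range(T):
    psi = np.empty_like(W)
    for k in range(M):           # adapt: local constant-step LMS update
        h = rng.standard_normal(N)
        d = h @ w_true + 0.01 * rng.standard_normal()
        psi[k] = W[k] + mu * (d - h @ W[k]) * h
    W = A @ psi                  # combine: average neighbors' intermediates

err = np.linalg.norm(W - w_true, axis=1)  # per-agent estimation error
```

With a small constant step size the per-agent errors settle into a steady-state distribution around `w_true` instead of converging exactly, which is the regime the abstract analyzes.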

6 citations

Journal ArticleDOI
TL;DR: This paper considers model combination methods for adaptive filtering that perform unbiased estimation and studies the steady-state performance of previously introduced methods as well as novel combination algorithms for stationary and nonstationary data that use stochastic gradient updates.
Abstract: In this paper, we consider model combination methods for adaptive filtering that perform unbiased estimation. In this widely studied framework, two adaptive filters are run in parallel, each producing unbiased estimates of an underlying linear model. The outputs of these two filters are combined using another adaptive algorithm to yield the final output of the system. Overall, we require that the final algorithm produce an unbiased estimate of the underlying model. We later specialize this framework where we combine one filter using the least-mean squares (LMS) update and the other filter using the least-mean fourth (LMF) update to decrease cross correlation in between the outputs and improve the overall performance. We study the steady-state performance of previously introduced methods as well as novel combination algorithms for stationary and nonstationary data. These algorithms use stochastic gradient updates instead of the variable transformations used in previous approaches. We explicitly provide steady-state analysis for both stationary and nonstationary environments. We also demonstrate close agreement with the introduced results and the simulations, and show for this specific combination, more than 2 dB gains in terms of excess mean square error with respect to the best constituent filter in the simulations.
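A rough sketch of the idea of running an LMS filter and an LMF filter in parallel and adaptively combining their outputs is given below. This is not the paper's unbiased-combination algorithm: it uses the common convex (sigmoid-parametrized) mixing rule with a stochastic-gradient update of the mixing parameter, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 20000
mu1, mu4, mu_a = 0.02, 0.001, 10.0   # LMS, LMF, and mixing step sizes
w_true = 0.5 * rng.standard_normal(N)

w_lms, w_lmf, a = np.zeros(N), np.zeros(N), 0.0
for _ in range(T):
    h = rng.standard_normal(N)
    d = h @ w_true + 0.05 * rng.standard_normal()
    e1 = d - h @ w_lms
    e4 = d - h @ w_lmf
    w_lms += mu1 * e1 * h            # LMS: gradient of the squared error
    w_lmf += mu4 * e4**3 * h         # LMF: gradient of the fourth-power error
    lam = 1 / (1 + np.exp(-a))       # mixing weight kept in (0, 1)
    e = d - (lam * (h @ w_lms) + (1 - lam) * (h @ w_lmf))
    # stochastic-gradient update of the mixing parameter a
    a += mu_a * e * (h @ w_lms - h @ w_lmf) * lam * (1 - lam)

w_mix = lam * w_lms + (1 - lam) * w_lmf  # combined model estimate
```

Because both constituent filters produce unbiased estimates of the same linear model, any convex combination of them is unbiased as well; the mixing update then steers the weight toward whichever filter currently has the smaller error.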

6 citations

Proceedings ArticleDOI
17 Jul 2005
TL;DR: A relay strategy for Alamouti space-time coded transmissions is proposed that is optimized to maximize the received SNR and the receiver is able to exploit the orthogonal structure of the code to ensure temporal and spatial diversity.
Abstract: A relay strategy for Alamouti space-time coded transmissions is proposed. The relay structure is optimized to maximize the received SNR, and the receiver is able to exploit the orthogonal structure of the code to ensure temporal and spatial diversity.

6 citations

Book ChapterDOI
01 Jan 1994
TL;DR: The least-mean-squares (LMS) algorithm was originally conceived as an approximate solution to the adaptive least-squares problem: it recursively updates the estimate of the weight vector along the direction of the instantaneous gradient of the sum of squared errors.
Abstract: An important problem that arises in many applications is the following adaptive problem: given a sequence of n × 1 input column vectors \(\{h_i\}\), and a corresponding sequence of desired scalar responses \(\{d_i\}\), find an estimate of an n × 1 column vector of weights w such that the sum of squared errors, \(\sum_{i=0}^{N} \left| d_i - h_i^T w \right|^2\), is minimized. The \(\{h_i, d_i\}\) are most often presented sequentially, and one is therefore required to find an adaptive scheme that recursively updates the estimate of w. The least-mean-squares (LMS) algorithm was originally conceived as an approximate solution to the above adaptive problem. It recursively updates the estimates of the weight vector along the direction of the instantaneous gradient of the sum squared error [1]. The introduction of the LMS adaptive filter in 1960 came as a significant development for a broad range of engineering applications since the LMS adaptive linear-estimation procedure requires essentially no advance knowledge of the signal statistics. The LMS, however, has been long thought to be an approximate minimizing solution to the above squared error criterion, and a rigorous minimization criterion has been missing.
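The LMS recursion described above, a step along the instantaneous gradient of the squared error, can be written as \(w_{i+1} = w_i + \mu\, h_i (d_i - h_i^T w_i)\). A minimal sketch (with illustrative dimensions, step size, and noise level):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, mu = 4, 5000, 0.01
w_true = rng.standard_normal(n)   # unknown model generating the data

w = np.zeros(n)
for _ in range(T):
    h = rng.standard_normal(n)                      # input vector h_i
    d = h @ w_true + 0.01 * rng.standard_normal()   # desired response d_i
    w += mu * (d - h @ w) * h    # step along the instantaneous gradient
```

Note that the recursion never forms or inverts a correlation matrix, which is why, as the abstract notes, it needs essentially no advance knowledge of the signal statistics.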

6 citations

Proceedings ArticleDOI
07 Nov 2004
TL;DR: In this paper, the mean-square performance of adaptive filters is studied in terms of stability conditions and expressions for the mean-square error and mean-square deviation of the filters, as well as the transient performance of the corresponding partially averaged systems.
Abstract: This paper uses averaging analysis to study the mean-square performance of adaptive filters, not only in terms of stability conditions but also in terms of expressions for the mean-square error and the mean-square deviation of the filters, as well as in terms of the transient performance of the corresponding partially averaged systems. The treatment relies on energy conservation arguments. Simulation results illustrate the analysis and the derived performance expressions.

6 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: A personal reflection on the ethereal, surreal nature of i, the square root of minus one, and on whether mathematics describing the real world can avoid it.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
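The CP model mentioned in the abstract, a sum of R rank-one tensors, can be illustrated directly with NumPy; one simple consequence is that any matricization of a rank-R CP tensor has matrix rank at most R. All sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R = 4, 5, 6, 2        # tensor dimensions and CP rank

# CP model: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r],
# i.e., a sum of R rank-one (outer-product) tensors.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Flattening one mode against the other two gives a matrix
# whose rank is at most R (generically exactly R).
X1 = X.reshape(I, J * K)
rank1 = np.linalg.matrix_rank(X1)
```

This low-rank structure of every unfolding is what CP fitting algorithms (such as alternating least squares) exploit.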

9,227 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI

6,278 citations

01 Jan 2016
Table of Integrals, Series, and Products

4,085 citations