Open Access

Adaptive estimation algorithms over distributed networks

TL;DR
In this article, an overview of adaptive estimation algorithms over distributed networks is provided, where each node is allowed to communicate with its neighbors in order to exploit the spatial dimension, while it also evolves locally to account for the time dimension.
Abstract
We provide an overview of adaptive estimation algorithms over distributed networks. The algorithms rely on local collaborations and exploit the space-time structure of the data. Each node is allowed to communicate with its neighbors in order to exploit the spatial dimension, while it also evolves locally to account for the time dimension. Algorithms of the least-mean-squares and least-squares types are described. Both incremental and diffusion strategies are considered.
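The abstract mentions diffusion strategies of the least-mean-squares type. As a rough illustration, the sketch below implements a combine-then-adapt diffusion LMS recursion in Python; the function name, step size, and combination matrix are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def diffusion_lms(U, D, A, mu=0.01):
    """Combine-then-adapt diffusion LMS (illustrative sketch).

    U : (N, T, M) array of regressors u_{k,i} for N nodes over T time instants
    D : (N, T)    array of desired signals d_k(i)
    A : (N, N)    combination matrix; A[l, k] is the weight node k assigns to
                  node l's estimate, with each column summing to one
    """
    N, T, M = U.shape
    W = np.zeros((N, M))                  # local estimates w_k
    for i in range(T):
        Phi = A.T @ W                     # combine: Phi[k] = sum_l A[l, k] * W[l]
        for k in range(N):
            u = U[k, i]                   # regressor of node k at time i
            e = D[k, i] - u @ Phi[k]      # local a priori error
            W[k] = Phi[k] + mu * e * u    # adapt around the combined estimate
    return W
```

The combination matrix A encodes the network topology; swapping the order of the combine and adapt steps gives the related adapt-then-combine variant.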


Citations
Journal ArticleDOI

Fast Distributed Gradient Methods

TL;DR: In this paper, the authors propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of per-node communications and per-node gradient evaluations.
Journal ArticleDOI

A Unification and Generalization of Exact Distributed First-Order Methods

TL;DR: This paper unifies, generalizes, and improves the convergence speed of the methods by Shi et al. (2015), Qu and Li (2017), and Nedic et al. (2016) when the underlying network is static and undirected, and establishes a global R-linear convergence rate for the proposed generalized method under strongly convex costs with Lipschitz continuous gradients.
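The class of exact distributed first-order methods referenced here is commonly built around a gradient-tracking recursion. The sketch below is a generic gradient-tracking iteration under assumed names and parameters, not the paper's unified method itself.

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, iters=500):
    """Generic distributed gradient-tracking recursion (illustrative sketch).

    grads : list of N callables; grads[i](x) returns the gradient of node i's cost
    W     : (N, N) doubly stochastic mixing matrix respecting the network topology
    x0    : (N, M) initial local iterates, one row per node
    """
    N, M = x0.shape
    x = x0.astype(float).copy()
    g = np.stack([grads[i](x[i]) for i in range(N)])   # local gradients
    y = g.copy()                                       # trackers of the average gradient
    for _ in range(iters):
        x_new = W @ x - alpha * y                      # mix with neighbors, step along tracker
        g_new = np.stack([grads[i](x_new[i]) for i in range(N)])
        y = W @ y + g_new - g                          # update the gradient tracker
        x, g = x_new, g_new
    return x
```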
Proceedings ArticleDOI

Distributed nonlinear Kalman filtering with applications to wireless localization

TL;DR: The resulting algorithms are robust to node and link failures, scalable, and fully distributed, in the sense that no fusion center is required and nodes communicate only with their neighbors.
Journal ArticleDOI

Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks

TL;DR: In this paper, the authors consider distributed optimization over random networks, where nodes cooperatively minimize the sum of their individual convex costs, and propose accelerated distributed gradient methods that are resilient to link failures, computationally cheap, and faster-converging than other distributed gradient methods.
Posted Content

Newton-like method with diagonal correction for distributed optimization

TL;DR: Distributed Quasi-Newton (DQN), as discussed by the authors, approximates the Hessian inverse by splitting the Hessian into its diagonal and off-diagonal parts, inverting the diagonal part, and approximating the inverse of the off-diagonal part through a weighted linear function.
References
Journal ArticleDOI

A survey on sensor networks

TL;DR: The current state of the art in sensor networks is captured in this article, where solutions are discussed under the protocol stack layers to which they relate.
Journal ArticleDOI

Consensus problems in networks of agents with switching topology and time-delays

TL;DR: A distinctive feature of this work is that it addresses consensus problems for networks with directed information flow, establishing a direct connection between the algebraic connectivity of the network and the performance of a linear consensus protocol.
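In its simplest fixed, undirected form, the linear consensus protocol referenced here is the iteration x(k+1) = x(k) - eps*L x(k), whose convergence speed is governed by the algebraic connectivity lambda_2(L). The sketch below illustrates only that basic form; the paper itself treats directed information flow, switching topologies, and time delays.

```python
import numpy as np

def consensus(L, x0, eps=None, iters=200):
    """Linear consensus protocol x(k+1) = x(k) - eps * L x(k) (illustrative sketch).

    L  : (N, N) Laplacian of an undirected, connected communication graph
    x0 : (N,)   initial node values
    Convergence toward the average is governed by the algebraic connectivity
    lambda_2(L); stability requires 0 < eps < 2 / lambda_max(L).
    """
    if eps is None:
        eps = 1.0 / (np.max(np.diag(L)) + 1.0)   # conservative choice based on the max degree
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = x - eps * (L @ x)
    return x
```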
Journal ArticleDOI

Fast linear iterations for distributed averaging

TL;DR: This work considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes, and gives several extensions and variations on the basic problem.
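The distributed averaging problem studied here amounts to choosing a weight matrix W so that the iteration x(k+1) = W x(k) converges to the average of the initial values. The sketch below builds W with the heuristic Metropolis rule rather than the optimized weights derived in the paper; the helper names are illustrative.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a symmetric, doubly stochastic weight matrix from a 0/1 adjacency
    matrix using the Metropolis rule (a heuristic, not the paper's optimized W)."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()      # make each row sum to one
    return W

def distributed_average(adj, x0, iters=100):
    """Run the linear averaging iteration x(k+1) = W x(k)."""
    W = metropolis_weights(adj)
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = W @ x
    return x
```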
Book

Fundamentals of adaptive filtering

Ali H. Sayed
TL;DR: This book provides a systematic treatment of the theory, design, and performance analysis of adaptive filters and the algorithms used to implement them.
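The adaptive filters treated in this book build on stochastic-gradient recursions such as LMS. As a minimal, self-contained illustration (not taken from the book), the sketch below runs a standard LMS filter; the signal names, filter length, and step size are assumptions.

```python
import numpy as np

def lms_filter(u, d, M=4, mu=0.05):
    """Standard LMS adaptive filter (illustrative sketch).

    u  : (T,) input signal
    d  : (T,) desired signal
    M  : filter length (number of taps)
    mu : step size
    Returns the final tap weights and the a priori error sequence.
    """
    T = len(u)
    w = np.zeros(M)
    e = np.zeros(T)
    for i in range(M, T):
        u_i = u[i - M:i][::-1]        # regressor: most recent M samples
        e[i] = d[i] - u_i @ w         # a priori estimation error
        w = w + mu * e[i] * u_i       # stochastic-gradient update
    return w, e
```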
Journal ArticleDOI

Distributed asynchronous deterministic and stochastic gradient optimization algorithms

TL;DR: A model for asynchronous distributed computation is presented and it is shown that natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms retain the desirable convergence properties of their centralized counterparts.