Journal ArticleDOI

A Distributed Classification/Estimation Algorithm for Sensor Networks

23 Jan 2014 · SIAM Journal on Control and Optimization (Society for Industrial and Applied Mathematics) · Vol. 52, Iss. 1, pp. 189-218
TL;DR: In this article, the problem of simultaneous classification and estimation of hidden parameters in a sensor network with communication constraints is addressed, and a cooperative iterative algorithm that copes with the constraints imposed by the network is proposed.
Abstract: In this paper, we address the problem of simultaneous classification and estimation of hidden parameters in a sensor network with communication constraints. In particular, we consider a network of noisy sensors which measure a common scalar unknown parameter. We assume that a fraction of the nodes represent faulty sensors, whose measurements are poorly reliable. The goal for each node is to simultaneously identify its class (faulty or nonfaulty) and estimate the common parameter. We propose a novel cooperative iterative algorithm which copes with the communication constraints imposed by the network and shows remarkable performance. Our main result is a rigorous proof of the convergence of the algorithm, under a fixed communication graph, and a characterization of the limit behavior as the network size goes to infinity. In particular, we prove that, in the limit when the number of sensors goes to infinity, the common unknown parameter is estimated with arbitrarily small error, while the classification error...
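For intuition, the sketch below illustrates the general idea of coupling a per-node classification step with a neighbourhood estimation step on a fixed communication graph. It is not the algorithm whose convergence is proved in the paper; the noise levels sigma_good/sigma_bad, the blending weight alpha, and the stopping rule are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): each node alternates
# between (1) classifying itself as faulty/non-faulty by comparing its own
# measurement with its current estimate, and (2) refining its estimate using
# only values received from its neighbours on a fixed communication graph.
import numpy as np

def classify_and_estimate(y, neighbors, sigma_good, sigma_bad,
                          alpha=0.2, n_iter=200):
    """y[i]: noisy measurement at node i; neighbors[i]: iterable of node i's neighbours."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    theta = y.copy()                      # local estimates of the common parameter
    p_good = np.full(n, 0.5)              # local beliefs of being non-faulty
    for _ in range(n_iter):
        # Classification step: likelihood ratio of the residual under the
        # "good" (small noise) and "faulty" (large noise) hypotheses.
        r = y - theta
        lik_good = np.exp(-0.5 * (r / sigma_good) ** 2) / sigma_good
        lik_bad = np.exp(-0.5 * (r / sigma_bad) ** 2) / sigma_bad
        p_good = lik_good / (lik_good + lik_bad)
        # Estimation step: blend the neighbourhood average of estimates with
        # the node's own measurement, weighted by its belief of being good.
        new_theta = theta.copy()
        for i in range(n):
            nb = list(neighbors[i])
            avg_nb = np.mean([theta[j] for j in nb]) if nb else theta[i]
            new_theta[i] = (1 - alpha) * avg_nb + alpha * (
                p_good[i] * y[i] + (1 - p_good[i]) * avg_nb)
        theta = new_theta
    return theta, p_good
```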
Citations
Journal ArticleDOI
TL;DR: The protocol is completely distributed, except that all nodes must know a common upper bound U on the total number of nodes that is correct within a constant multiplicative factor.

17 citations

Journal ArticleDOI
TL;DR: A fully distributed and easily implementable approach is proposed that allows each DTN node to rapidly identify whether its sensors are producing faulty data; the dynamical behavior of the proposed algorithm is approximated by continuous-time state equations, whose equilibrium is characterized.
Abstract: Propagation of faulty data is a critical issue. In case of Delay Tolerant Networks (DTN) in particular, the rare meeting events require that nodes are efficient in propagating only correct information. For that purpose, mechanisms to rapidly identify possible faulty nodes should be developed. Distributed faulty node detection has been addressed in the literature in the context of sensor and vehicular networks, but already proposed solutions suffer from long delays in identifying and isolating nodes producing faulty data. This is unsuitable to DTNs where nodes meet only rarely. This paper proposes a fully distributed and easily implementable approach to allow each DTN node to rapidly identify whether its sensors are producing faulty data. The dynamical behavior of the proposed algorithm is approximated by some continuous-time state equations, whose equilibrium is characterized. The presence of misbehaving nodes, trying to perturb the faulty node detection process, is also taken into account. Detection and false alarm rates are estimated by comparing both theoretical and simulation results. Numerical results assess the effectiveness of the proposed solution and can be used to give guidelines for the algorithm design.

16 citations
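As a rough illustration of the self-diagnosis idea in the abstract above (not the protocol analysed in the cited paper), a node could accumulate pairwise comparisons made at meeting events and flag its own sensor once disagreements dominate; the tolerance, threshold, and minimum number of meetings below are illustrative assumptions.

```python
# Rough sketch (not the cited protocol): a DTN node compares readings at each
# encounter and flags its own sensor as faulty when most comparisons disagree.
class DTNNode:
    def __init__(self, read_sensor, tol=1.0, threshold=0.6, min_meetings=10):
        self.read_sensor = read_sensor     # callable returning this node's measurement
        self.tol = tol                     # max |difference| still counted as agreement
        self.threshold = threshold         # disagreement ratio that triggers the flag
        self.min_meetings = min_meetings   # evidence required before deciding
        self.meetings = 0
        self.disagreements = 0
        self.faulty_flag = False

    def on_meeting(self, other):
        """Called when this node encounters another node in the DTN."""
        mine, theirs = self.read_sensor(), other.read_sensor()
        self.meetings += 1
        if abs(mine - theirs) > self.tol:
            self.disagreements += 1
        if self.meetings >= self.min_meetings:
            self.faulty_flag = (self.disagreements / self.meetings) > self.threshold
```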

Journal ArticleDOI
TL;DR: This paper proposes three methods that attempt to minimize the given function while promoting sparsity of the solution, and theoretically proves the convergence and characterizes the limit points of AHT in regular networks under proper assumptions on the functions to be minimized.
Abstract: Distributed optimization in multi-agent systems under sparsity constraints has recently received a lot of attention. In this paper, we consider the in-network minimization of a continuously differentiable nonlinear function which is a combination of local agent objective functions subject to sparsity constraints on the variables. A crucial issue of in-network optimization is the handling of the communications, which may be expensive. This calls for efficient algorithms, that are able to reduce the number of required communication links and transmitted messages. To this end, we focus on asynchronous and randomized distributed techniques. Based on consensus techniques and iterative hard thresholding methods, we propose three methods that attempt to minimize the given function, promoting sparsity of the solution: asynchronous hard thresholding (AHT), broadcast hard thresholding (BHT), and gossip hard thresholding (GHT). Although the proposed algorithms are similar in many aspects, it is difficult to obtain a unified analysis for them. Specifically, we theoretically prove the convergence and characterize the limit points of AHT in regular networks under some proper assumptions on the functions to be minimized. For BHT and GHT, instead, we characterize the fixed points of the maps that rule their dynamics in terms of stationary points of the original problem. Finally, we illustrate the implementation of our techniques in compressed sensing and present several numerical results on performance and the number of transmissions required for convergence.

14 citations
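To make the gossip-plus-hard-thresholding combination concrete, here is a hedged sketch in the spirit of the GHT idea described above, not the authors' exact update rules: a random pair of agents averages its iterates, and each agent then takes a local gradient step followed by hard thresholding. The least-squares local objectives, step size, and fixed iteration count are illustrative assumptions.

```python
# Illustrative sketch of combining random gossip with iterative hard
# thresholding. Each agent i holds data (A_i, b_i) and the network seeks an
# s-sparse minimizer of the sum of the local least-squares objectives.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def gossip_iht(A_list, b_list, s, step=0.01, n_iter=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    n_agents, dim = len(A_list), A_list[0].shape[1]
    X = np.zeros((n_agents, dim))                 # one iterate per agent
    for _ in range(n_iter):
        i, j = rng.choice(n_agents, size=2, replace=False)  # random gossip pair
        avg = 0.5 * (X[i] + X[j])                            # consensus step
        for k in (i, j):
            grad = A_list[k].T @ (A_list[k] @ avg - b_list[k])  # local gradient
            X[k] = hard_threshold(avg - step * grad, s)         # IHT step
    return X.mean(axis=0)
```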

Journal ArticleDOI
TL;DR: Two robust and adaptive distributed hybrid classification algorithms are introduced; they are designed to be applicable to on-line classification problems and to be insensitive to outliers.
Abstract: Distributed adaptive signal processing and communication networking are rapidly advancing research areas which enable new and powerful signal processing tasks, e.g., distributed speech enhancement in adverse environments. An emerging new paradigm is that of multiple devices cooperating in multiple tasks (MDMT). This is different from the classical wireless sensor network (WSN) setup, in which multiple devices perform one single joint task. A crucial first step in order to achieve a benefit, e.g., a better node-specific audio signal enhancement, is the common unique labeling of all relevant sources that are observed by the network. This challenging research question can be addressed by designing adaptive data clustering and classification rules based on a set of noisy unlabeled sensor observations. In this paper, two robust and adaptive distributed hybrid classification algorithms are introduced. They consist of a local clustering phase that uses a small part of the data with a subsequent, fully distributed on-line classification phase. The classification is performed by means of distance-based similarity measures. In order to deal with the presence of outliers, the distances are estimated robustly. An extensive simulation-based performance analysis is provided for the proposed algorithms. The distributed hybrid classification approaches are compared to a benchmark algorithm where the error rates are evaluated in dependence of different WSN parameters. Communication cost and computation time are compared for all algorithms under test. Since both proposed approaches use robust estimators, they are, to a certain degree, insensitive to outliers. Furthermore, they are designed in a way that they are applicable to on-line classification problems.

12 citations
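The following sketch illustrates the general flavour of distance-based on-line classification with robust distance estimation, not the two algorithms introduced in the cited paper: class prototypes are coordinate-wise medians of recently seen samples, which keeps the similarity measure insensitive to a moderate fraction of outliers. The window length and the initially labelled set (standing in for the local clustering phase) are illustrative assumptions.

```python
# Illustrative sketch: on-line, distance-based classification with robust
# (median-based) class prototypes.
from collections import deque
import numpy as np

class RobustOnlineClassifier:
    def __init__(self, init_data, init_labels, window=200):
        """init_data/init_labels stand in for the output of a local clustering phase."""
        self.buffers = {}
        for x, lbl in zip(init_data, init_labels):
            self.buffers.setdefault(lbl, deque(maxlen=window)).append(np.asarray(x, float))

    def _centroid(self, lbl):
        # Coordinate-wise median: a robust prototype of the class.
        return np.median(np.vstack(self.buffers[lbl]), axis=0)

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        # Assign to the class whose robust prototype is closest.
        label = min(self.buffers, key=lambda l: np.linalg.norm(x - self._centroid(l)))
        self.buffers[label].append(x)     # on-line update of that class buffer
        return label
```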

References
Journal ArticleDOI
TL;DR: In this article, a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability is presented.
Abstract: Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability.

9,312 citations
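The scheme described above is the classical Robbins-Monro stochastic approximation recursion. A minimal sketch follows; the test function M(x) = 2x + 1, the noise level, and the step-size constant c are chosen purely for illustration.

```python
# Robbins-Monro sketch: successive levels x_1, x_2, ... chosen so that x_n
# tends to theta, the root of M(x) = alpha, using only noisy responses.
# Step sizes a_n = c/n satisfy sum a_n = inf and sum a_n^2 < inf.
import numpy as np

def robbins_monro(noisy_response, alpha, x0=0.0, c=1.0, n_steps=10000):
    """noisy_response(x) returns M(x) plus zero-mean noise (M assumed increasing)."""
    x = x0
    for n in range(1, n_steps + 1):
        y = noisy_response(x)
        x = x - (c / n) * (y - alpha)     # move against the observed error
    return x

# Example: M(x) = 2x + 1, so M(x) = 5 has root theta = 2.
rng = np.random.default_rng(0)
estimate = robbins_monro(lambda x: 2 * x + 1 + rng.normal(scale=0.5), alpha=5.0)
```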

Book
01 Jan 1986
TL;DR: This book discusses Mathematical Aspects of Mixtures, Sequential Problems and Procedures, and Applications of Finite Mixture Models.
Abstract: Statistical Problems. Applications of Finite Mixture Models. Mathematical Aspects of Mixtures. Learning About the Parameters of a Mixture. Learning About the Components of a Mixture. Sequential Problems and Procedures.

3,464 citations

Journal ArticleDOI
TL;DR: This work discusses the formulation and theoretical and practical properties of the EM algorithm, a specialization to the mixture density context of a general algorithm used to approximate maximum-likelihood estimates for incomplete data problems.
Abstract: The problem of estimating the parameters which determine a mixture density has been the subject of a large, diverse body of literature spanning nearly ninety years. During the last two decades, the...

2,836 citations

Journal Article
TL;DR: In this paper, the authors describe the EM algorithm and its use for finding the parameters of a mixture of Gaussian densities and of a hidden Markov model (HMM), for both discrete and Gaussian mixture observation models.
Abstract: We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.

2,455 citations
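As a companion to the EM description above, here is a minimal sketch of the mixture-of-Gaussians case in one dimension (illustrative only; the tutorial derives the general multivariate and HMM cases). The E-step computes responsibilities and the M-step re-estimates weights, means, and variances.

```python
# Minimal EM sketch for a one-dimensional Gaussian mixture with k components.
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.full(k, 1.0 / k)                       # mixture weights
    mu = rng.choice(x, size=k, replace=False)     # initial means
    var = np.full(k, np.var(x))                   # initial variances
    for _ in range(n_iter):
        # E-step: responsibilities gamma[i, j] = P(component j | x_i).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        gamma = w * dens
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = gamma.sum(axis=0)
        w = nk / n
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```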